Stochastic Process


Any measurement of a property of a particle results in an irreversible wave function collapse of that particle and changes the original quantum state.[1] With entangled particles, such measurements affect the entangled system as a whole.

Such phenomena were the subject of a 1935 paper by Albert Einstein, Boris Podolsky, and Nathan Rosen,[2] and several papers by Erwin Schrödinger shortly thereafter,[3][4] describing what came to be known as the EPR paradox. Einstein and others considered such behavior impossible, as it violated the local realism view of causality (Einstein referring to it as "spooky action at a distance")[5] and argued that the accepted formulation of quantum mechanics must therefore be incomplete.

Later, however, the counterintuitive predictions of quantum mechanics were verified[6][7] in tests where polarization or spin of entangled particles was measured at separate locations, statistically violating Bell's inequality.[8] In earlier tests, it could not be ruled out that the result at one point could have been subtly transmitted to the remote point, affecting the outcome at the second location.[8] However, so-called "loophole-free" Bell tests have since been performed where the locations were sufficiently separated that communications at the speed of light would have taken longer—in one case, 10,000 times longer—than the interval between the measurements.[7][6]

According to some interpretations of quantum mechanics, the effect of one measurement occurs instantly. Other interpretations which do not recognize wavefunction collapse dispute that there is any "effect" at all. However, all interpretations agree that entanglement produces correlation between the measurements, and that the mutual information between the entangled particles can be exploited, but that any transmission of information at faster-than-light speeds is impossible.[9][10] Thus, despite popular thought to the contrary, quantum entanglement cannot be used for faster-than-light communication.[11]

Quantum entanglement has been demonstrated experimentally with photons,[12][13] electrons,[14][15] top quarks,[16] molecules[17] and even small diamonds.[18] The use of entanglement in communication, computation and quantum radar is an active area of research and development.

History
Further information: Hidden-variable theory

Article headline regarding the Einstein–Podolsky–Rosen (EPR) paradox paper, in the May 4,
1935 issue of The New York Times

In 1935, Albert Einstein, Boris Podolsky and Nathan Rosen published a paper on the counterintuitive predictions that quantum mechanics makes for pairs of objects prepared together in a particular way.[2] In this study, the three formulated the Einstein–Podolsky–Rosen paradox (EPR paradox), a thought experiment that attempted to show that "the quantum-mechanical description of physical reality given by wave functions is not complete."[2] However, the three scientists did not coin the word entanglement, nor did they generalize the special properties of the quantum state they considered. Following the EPR paper, Erwin Schrödinger wrote a letter to Einstein in German in which he used the word Verschränkung (translated by himself as entanglement) "to describe the correlations between two particles that interact and then separate, as in the EPR experiment."[19] However, Schrödinger had discussed the phenomenon as early as 1932.[20]

Schrödinger shortly thereafter published a seminal paper defining and discussing the notion of "entanglement." In the paper, he recognized the importance of the concept, and stated:[3] "I would not call [entanglement] one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought." Like Einstein, Schrödinger was dissatisfied with the concept of entanglement, because it seemed to violate the speed limit on the transmission of information implicit in the theory of relativity.[21] Einstein later famously derided entanglement as "spukhafte Fernwirkung" or "spooky action at a distance."[22]
The EPR paper generated significant interest among physicists, which inspired much
discussion about the foundations of quantum mechanics and Bohm's interpretation in
particular, but produced relatively little other published work. Despite the interest, the
weak point in EPR's argument was not discovered until 1964, when John Stewart Bell
proved that one of their key assumptions, the principle of locality, as applied to the kind
of hidden variables interpretation hoped for by EPR, was mathematically inconsistent
with the predictions of quantum theory.

Specifically, Bell demonstrated an upper limit, seen in Bell's inequality, regarding the strength of correlations that can be produced in any theory obeying local realism, and showed that quantum theory predicts violations of this limit for certain entangled systems.[23] His inequality is experimentally testable, and there have been numerous relevant experiments, starting with the pioneering work of Stuart Freedman and John Clauser in 1972[24] and Alain Aspect's experiments in 1982.[25]

An early experimental breakthrough was due to Carl Kocher,[12][13] who already in 1967 presented an apparatus in which two photons successively emitted from a calcium atom were shown to be entangled – the first case of entangled visible light. The two photons passed diametrically positioned parallel polarizers with higher probability than classically predicted but with correlations in quantitative agreement with quantum mechanical calculations. He also showed that the correlation varied as the squared cosine of the angle between the polarizer settings[13] and decreased exponentially with time lag between emitted photons.[26] Kocher's apparatus, equipped with better polarizers, was used by Freedman and Clauser who could confirm the cosine-squared dependence and use it to demonstrate a violation of Bell's inequality for a set of fixed angles.[24] All these experiments have shown agreement with quantum mechanics rather than the principle of local realism.

For decades, each had left open at least one loophole by which it was possible to question the validity of the results. However, in 2015 an experiment was performed that simultaneously closed both the detection and locality loopholes, and was heralded as "loophole-free"; this experiment ruled out a large class of local realism theories with certainty.[27] Aspect writes that "... no experiment ... can be said to be totally loophole-free," but he says the experiments "remove the last doubts that we should renounce" local hidden variables, and refers to examples of remaining loopholes as being "far fetched" and "foreign to the usual way of reasoning in physics."[28]

Bell's work raised the possibility of using these super-strong correlations as a resource for communication. It led to the 1984 discovery of quantum key distribution protocols, most famously BB84 by Charles H. Bennett and Gilles Brassard[29] and E91 by Artur Ekert.[30] Although BB84 does not use entanglement, Ekert's protocol uses the violation of a Bell inequality as a proof of security.
In 2022, the Nobel Prize in Physics was awarded to Alain Aspect, John Clauser, and Anton Zeilinger "for experiments with entangled photons, establishing the violation of Bell inequalities and pioneering quantum information science".[31]

Concept
Meaning of entanglement

An entangled system can be defined to be one whose quantum state cannot be factored
as a product of states of its local constituents; that is to say, they are not individual
particles but are an inseparable whole. In entanglement, one constituent cannot be fully
described without considering the other(s). The state of a composite system is always
expressible as a sum, or superposition, of products of states of local constituents; it is
entangled if this sum cannot be written as a single product term.

Quantum systems can become entangled through various types of interactions. For some ways in which entanglement may be achieved for experimental purposes, see the section below on methods. Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made.[32]

As an example of entanglement: a subatomic particle decays into an entangled pair of other particles. The decay events obey the various conservation laws, and as a result, the measurement outcomes of one daughter particle must be highly correlated with the measurement outcomes of the other daughter particle (so that the total momenta, angular momenta, energy, and so forth remain roughly the same before and after this process). For instance, a spin-zero particle could decay into a pair of spin-1/2 particles. Since the total spin before and after this decay must be zero (conservation of angular momentum), whenever the first particle is measured to be spin up on some axis, the other, when measured on the same axis, is always found to be spin down. (This is called the spin anti-correlated case; and if the prior probabilities for measuring each spin are equal, the pair is said to be in the singlet state.)
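This perfect anti-correlation is easy to check numerically. Below is a minimal sketch (an illustration added here, not from the original text) that builds the two-qubit singlet state with NumPy and verifies that the probability of finding both particles spin up along the same axis is zero:

```python
import numpy as np

# Single-qubit basis states: |0> = spin up, |1> = spin down
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)

# Singlet state (|01> - |10>)/sqrt(2): total spin zero
singlet = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

# Projector onto "spin up" for a single qubit
P_up = np.outer(up, up.conj())

# Probability that *both* particles are found spin up on the same axis
prob_both_up = np.real(singlet.conj() @ np.kron(P_up, P_up) @ singlet)
print(prob_both_up)  # 0.0: the outcomes are perfectly anti-correlated
```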

The above result may or may not be perceived as surprising. A classical system would
display the same property, and a hidden variable theory would certainly be required to
do so, based on conservation of angular momentum in classical and quantum
mechanics alike. The difference is that a classical system has definite values for all the
observables all along, while the quantum system does not. In a sense to be discussed
below, the quantum system considered here seems to acquire a probability distribution
for the outcome of a measurement of the spin along any axis of the other particle upon
measurement of the first particle. This probability distribution is in general different from
what it would be without measurement of the first particle. This may certainly be
perceived as surprising in the case of spatially separated entangled particles.

Paradox

The paradox is that a measurement made on either of the particles apparently collapses the state of the entire entangled system—and does so instantaneously, before any information about the measurement result could have been communicated to the other particle (assuming that information cannot travel faster than light) and hence assured the "proper" outcome of the measurement of the other part of the entangled pair. In the Copenhagen interpretation, the result of a spin measurement on one of the particles is a collapse (of wave function) into a state in which each particle has a definite spin (either up or down) along the axis of measurement. The outcome is taken to be random, with each possibility having a probability of 50%. However, if both spins are measured along the same axis, they are found to be anti-correlated. This means that the random outcome of the measurement made on one particle seems to have been transmitted to the other, so that it can make the "right choice" when it too is measured.[33]

The distance and timing of the measurements can be chosen so as to make the interval
between the two measurements spacelike, hence, any causal effect connecting the
events would have to travel faster than light. According to the principles of special
relativity, it is not possible for any information to travel between two such measuring
events. It is not even possible to say which of the measurements came first. For two
spacelike separated events x1 and x2 there are inertial frames in which x1 is first and
others in which x2 is first. Therefore, the correlation between the two measurements
cannot be explained as one measurement determining the other: different observers
would disagree about the role of cause and effect.

(In fact similar paradoxes can arise even without entanglement: the position of a single
particle is spread out over space, and two widely separated detectors attempting to
detect the particle in two different places must instantaneously attain appropriate
correlation, so that they do not both detect the particle.)

Hidden variables theory

A possible resolution to the paradox is to assume that quantum theory is incomplete, and the result of measurements depends on predetermined "hidden variables".[34] The state of the particles being measured contains some hidden variables, whose values effectively determine, right from the moment of separation, what the outcomes of the spin measurements are going to be. This would mean that each particle carries all the required information with it, and nothing needs to be transmitted from one particle to the other at the time of measurement. Einstein and others (see the previous section) originally believed this was the only way out of the paradox, and the accepted quantum mechanical description (with a random measurement outcome) must be incomplete.

Violations of Bell's inequality

Local hidden variable theories fail, however, when measurements of the spin of entangled particles along different axes are considered. If a large number of pairs of such measurements are made (on a large number of pairs of entangled particles), then statistically, if the local realist or hidden variables view were correct, the results would always satisfy Bell's inequality. A number of experiments have shown in practice that Bell's inequality is not satisfied. However, prior to 2015, all of these experiments had loophole problems that were considered the most important by the community of physicists.[35][36] When measurements of the entangled particles are made in moving relativistic reference frames, in which each measurement (in its own relativistic time frame) occurs before the other, the measurement results remain correlated.[37][38]

The fundamental issue about measuring spin along different axes is that these measurements cannot have definite values at the same time―they are incompatible in the sense that these measurements' maximum simultaneous precision is constrained by the uncertainty principle. This is contrary to what is found in classical physics, where any number of properties can be measured simultaneously with arbitrary accuracy. It has been proven mathematically that compatible measurements cannot show Bell-inequality-violating correlations,[39] and thus entanglement is a fundamentally non-classical phenomenon.
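To illustrate the size of the violation, here is a hedged numerical sketch (assuming the standard CHSH formulation, which the article itself does not spell out): it computes the CHSH combination of correlations for the singlet state at the usual measurement angles and recovers the quantum value 2√2, above the local-realist bound of 2.

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2), amplitudes ordered |00>,|01>,|10>,|11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <A(a) (x) B(b)> in the singlet state; equals -cos(a - b)."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# Standard CHSH measurement angles (radians)
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, 3 * np.pi / 4

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S))  # ~2.828 = 2*sqrt(2) > 2, the local-realist (CHSH) bound
```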

Notable experimental results proving quantum entanglement

The first experiment that verified Einstein's spooky action at a distance (entanglement) was successfully corroborated in a lab by Chien-Shiung Wu and colleague I. Shaknov in 1949, and was published on New Year's Day in 1950. The result specifically proved the quantum correlations of a pair of photons.[40] In experiments in 2012 and 2013, polarization correlation was created between photons that never coexisted in time.[41][42] The authors claimed that this result was achieved by entanglement swapping between two pairs of entangled photons after measuring the polarization of one photon of the early pair, and that it proves that quantum non-locality applies not only to space but also to time.
In three independent experiments in 2013, it was shown that classically communicated separable quantum states can be used to carry entangled states.[43] The first loophole-free Bell test was held by Ronald Hanson of the Delft University of Technology in 2015, confirming the violation of Bell inequality.[44]

In August 2014, Brazilian researcher Gabriela Barreto Lemos and team were able to "take pictures" of objects using photons that had not interacted with the subjects, but were entangled with photons that did interact with such objects. Lemos, from the University of Vienna, is confident that this new quantum imaging technique could find application where low light imaging is imperative, in fields such as biological or medical imaging.[45]

Since 2016, various companies, for example IBM and Microsoft, have created quantum computers that allowed developers and tech enthusiasts to freely experiment with concepts of quantum mechanics including quantum entanglement.[46]

Emergence of time from quantum entanglement

There is a fundamental conflict, referred to as the problem of time, between the way the concept of time is used in quantum mechanics, and the role it plays in general relativity. In standard quantum theories time acts as an independent background through which states evolve, with the Hamiltonian operator acting as the generator of infinitesimal translations of quantum states through time.[47]

In contrast, general relativity treats time as a dynamical variable which relates directly with matter and moreover requires the Hamiltonian constraint to vanish. In quantized general relativity, the quantum version of the Hamiltonian constraint using metric variables leads to the Wheeler–DeWitt equation:

Ĥ(x)|ψ⟩ = 0

where Ĥ(x) is the Hamiltonian constraint and |ψ⟩ stands for the wave function of the universe. The operator Ĥ acts on the Hilbert space of wave functions, but it is not the same Hilbert space as in the nonrelativistic case. This Hamiltonian no longer determines the evolution of the system because the Schrödinger equation

Ĥ|ψ⟩ = iħ (∂/∂t)|ψ⟩

ceases to be valid. This property is known as timelessness. Various attempts to incorporate time in a fully quantum framework have been made, starting with the Page and Wootters mechanism and other subsequent proposals.[48][49]

The emergence of time was also proposed as arising from quantum correlations between an evolving system and a reference quantum clock system; the concept of system–time entanglement has been introduced as a quantifier of the actual distinguishable evolution undergone by the system.[50][51][52][53]

Emergent gravity

Based on the AdS/CFT correspondence, Mark Van Raamsdonk suggested that spacetime arises as an emergent phenomenon of the quantum degrees of freedom that are entangled and live in the boundary of the spacetime.[54] Induced gravity can emerge from the entanglement first law.[55][56]

Non-locality and entanglement


In the media and popular science, quantum non-locality is often portrayed as being equivalent to entanglement. While this is true for pure bipartite quantum states, in general entanglement is only necessary for non-local correlations, but there exist mixed entangled states that do not produce such correlations.[57] A well-known example is the Werner states that are entangled for certain values of p_sym, but can always be described using local hidden variables.[58] Moreover, it was shown that, for arbitrary numbers of particles, there exist states that are genuinely entangled but admit a local model.[59]

The mentioned proofs about the existence of local models assume that there is only one copy of the quantum state available at a time. If the particles are allowed to perform local measurements on many copies of such states, then many apparently local states (e.g., the qubit Werner states) can no longer be described by a local model. This is, in particular, true for all distillable states. However, it remains an open question whether all entangled states become non-local given sufficiently many copies.[60]

Entanglement of a state shared by two particles is necessary, but not sufficient for that state to be non-local. Entanglement is more commonly viewed as an algebraic concept, noted for being a prerequisite to non-locality as well as to quantum teleportation and to superdense coding, whereas non-locality is defined according to experimental statistics and is much more involved with the foundations and interpretations of quantum mechanics.[61]

Quantum-mechanical framework
The following subsections are for those with a good working knowledge of the formal,
mathematical description of quantum mechanics, including familiarity with the formalism
and theoretical framework developed in the articles: bra–ket notation and mathematical
formulation of quantum mechanics.

Pure states

Consider two arbitrary quantum systems A and B, with respective Hilbert spaces H_A and H_B. The Hilbert space of the composite system is the tensor product

H_A ⊗ H_B.

If the first system is in state |ψ⟩_A and the second in state |ϕ⟩_B, the state of the composite system is

|ψ⟩_A ⊗ |ϕ⟩_B.

States of the composite system that can be represented in this form are called separable states, or product states.

Not all states are separable states (and thus product states). Fix a basis {|i⟩_A} for H_A and a basis {|j⟩_B} for H_B. The most general state in H_A ⊗ H_B is of the form

|ψ⟩_AB = Σ_{i,j} c_{ij} |i⟩_A ⊗ |j⟩_B.

This state is separable if there exist vectors [c_i^A], [c_j^B] so that c_{ij} = c_i^A c_j^B, yielding

|ψ⟩_A = Σ_i c_i^A |i⟩_A and |ϕ⟩_B = Σ_j c_j^B |j⟩_B.

It is inseparable if for any vectors [c_i^A], [c_j^B], at least for one pair of coordinates c_i^A, c_j^B, we have c_{ij} ≠ c_i^A c_j^B. If a state is inseparable, it is called an 'entangled state'.

For example, given two basis vectors {|0⟩_A, |1⟩_A} of H_A and two basis vectors {|0⟩_B, |1⟩_B} of H_B, the following is an entangled state:

(1/√2)(|0⟩_A ⊗ |1⟩_B − |1⟩_A ⊗ |0⟩_B).

If the composite system is in this state, it is impossible to attribute to either system A or system B a definite pure state. Another way to say this is that while the von Neumann entropy of the whole state is zero (as it is for any pure state), the entropy of the subsystems is greater than zero. In this sense, the systems are "entangled". This has specific empirical ramifications for interferometry.[62] The above example is one of four Bell states, which are (maximally) entangled pure states (pure states of the H_A ⊗ H_B space, but which cannot be separated into pure states of each H_A and H_B).
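A quick way to test a pure two-qubit state for separability is the Schmidt decomposition: the state is a product state exactly when its coefficient matrix c_ij has a single nonzero singular value. A minimal sketch (an illustration, not part of the original text):

```python
import numpy as np

def is_product_state(state, dA=2, dB=2, tol=1e-10):
    """A bipartite pure state is separable iff its coefficient matrix c_ij
    (the state vector reshaped to dA x dB) has exactly one nonzero
    singular value, i.e. Schmidt rank 1."""
    schmidt_coeffs = np.linalg.svd(state.reshape(dA, dB), compute_uv=False)
    return int(np.sum(schmidt_coeffs > tol)) == 1

# Product state |0>_A (x) |1>_B
product = np.kron([1, 0], [0, 1]).astype(complex)
# Entangled Bell state (|01> - |10>)/sqrt(2)
bell = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

print(is_product_state(product))  # True
print(is_product_state(bell))     # False: two equal Schmidt coefficients
```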

Now suppose Alice is an observer for system A, and Bob is an observer for system B. If in the entangled state given above Alice makes a measurement in the {|0⟩, |1⟩} eigenbasis of A, there are two possible outcomes, occurring with equal probability:[63]

1. Alice measures 0, and the state of the system collapses to |0⟩_A |1⟩_B.
2. Alice measures 1, and the state of the system collapses to |1⟩_A |0⟩_B.

If the former occurs, then any subsequent measurement performed by Bob, in the same basis, will always return 1. If the latter occurs (Alice measures 1), then Bob's measurement will return 0 with certainty. Thus, system B has been altered by Alice performing a local measurement on system A. This remains true even if the systems A and B are spatially separated. This is the foundation of the EPR paradox.

The outcome of Alice's measurement is random. Alice cannot decide which state to
collapse the composite system into, and therefore cannot transmit information to Bob by
acting on her system. Causality is thus preserved, in this particular scheme. For the
general argument, see no-communication theorem.
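These measurement statistics are easy to simulate. The sketch below (illustrative only; the function names are my own) collapses the entangled state according to Alice's random outcome and shows that Bob's subsequent measurement in the same basis is then fully determined:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Entangled state (|01> - |10>)/sqrt(2), amplitudes ordered |00>,|01>,|10>,|11>
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def alice_measures(state):
    """Measure A in the {|0>,|1>} basis; collapse the joint state."""
    c = state.reshape(2, 2)                 # rows: Alice's outcome, cols: Bob's
    p0 = float(np.sum(np.abs(c[0]) ** 2))   # probability Alice gets 0
    outcome = 0 if rng.random() < p0 else 1
    collapsed = np.zeros_like(c)
    collapsed[outcome] = c[outcome] / np.linalg.norm(c[outcome])
    return outcome, collapsed.reshape(4)

def bob_measures(collapsed_state):
    """After the collapse, Bob's marginal is deterministic in this basis."""
    c = collapsed_state.reshape(2, 2)
    probs = np.sum(np.abs(c) ** 2, axis=0)  # marginal distribution for Bob
    return int(np.argmax(probs))

for _ in range(5):
    a, post = alice_measures(psi)
    print(a, bob_measures(post))  # always opposite: (0, 1) or (1, 0)
```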

Ensembles
As mentioned above, a state of a quantum system is given by a unit vector in a Hilbert space. More generally, if one has less information about the system, then one calls it an 'ensemble' and describes it by a density matrix, which is a positive-semidefinite matrix (or a trace class operator when the state space is infinite-dimensional) with trace 1. By the spectral theorem, such a matrix takes the general form:

ρ = Σ_i w_i |α_i⟩⟨α_i|,

where the w_i are positive-valued probabilities (they sum up to 1), the vectors |α_i⟩ are unit vectors, and in the infinite-dimensional case, we would take the closure of such states in the trace norm. We can interpret ρ as representing an ensemble where w_i is the proportion of the ensemble whose states are |α_i⟩. When a mixed state has rank 1, it therefore describes a 'pure ensemble'. When there is less than total information about the state of a quantum system, we need density matrices to represent the state.

Experimentally, a mixed ensemble might be realized as follows. Consider a "black box" apparatus that spits electrons towards an observer. The electrons' Hilbert spaces are identical. The apparatus might produce electrons that are all in the same state; in this case, the electrons received by the observer are then a pure ensemble. However, the apparatus could produce electrons in different states. For example, it could produce two populations of electrons: one with state |z+⟩ with spins aligned in the positive z direction, and the other with state |y−⟩ with spins aligned in the negative y direction. Generally, this is a mixed ensemble, as there can be any number of populations, each corresponding to a different state.
Following the definition above, for a bipartite composite system, mixed states are just density matrices on H_A ⊗ H_B. That is, the general form is

ρ = Σ_i w_i [Σ_j c̄_{ij} (|α_{ij}⟩ ⊗ |β_{ij}⟩)] [Σ_k c_{ik} (⟨α_{ik}| ⊗ ⟨β_{ik}|)],

where the w_i are positively valued probabilities, Σ_j |c_{ij}|² = 1, and the vectors are unit vectors. This is self-adjoint and positive and has trace 1.

Extending the definition of separability from the pure case, we say that a mixed state is separable if it can be written as[64]: 131–132

ρ = Σ_i w_i ρ_i^A ⊗ ρ_i^B,

where the w_i are positively valued probabilities and the ρ_i^A's and ρ_i^B's are themselves mixed states (density operators) on the subsystems A and B respectively. In other words, a state is separable if it is a probability distribution over uncorrelated states, or product states. By writing the density matrices as sums of pure ensembles and expanding, we may assume without loss of generality that ρ_i^A and ρ_i^B are themselves pure ensembles. A state is then said to be entangled if it is not separable.

In general, finding out whether or not a mixed state is entangled is considered difficult. The general bipartite case has been shown to be NP-hard.[65] For the 2 × 2 and 2 × 3 cases, a necessary and sufficient criterion for separability is given by the famous Positive Partial Transpose (PPT) condition.[66]
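As an illustration of the PPT condition, the following sketch (the noise parametrization and names are chosen here only for the example) applies the partial-transpose test to two-qubit Werner-type states, mixtures of a singlet with white noise, which are separable for p ≤ 1/3 and entangled above:

```python
import numpy as np

def partial_transpose(rho, dA=2, dB=2):
    """Transpose subsystem B of a density matrix on H_A (x) H_B."""
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)

def is_ppt(rho):
    """PPT test: for 2x2 (and 2x3) systems, rho is separable exactly when
    its partial transpose has no negative eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(partial_transpose(rho)) > -1e-12))

# Werner-type state: p * singlet + (1 - p) * maximally mixed noise
bell = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
singlet = np.outer(bell, bell.conj())

for p in [0.2, 1/3, 0.5, 0.9]:
    rho = p * singlet + (1 - p) * np.eye(4) / 4
    print(p, is_ppt(rho))  # PPT (separable) for p <= 1/3, entangled above
```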

Reduced density matrices

The idea of a reduced density matrix was introduced by Paul Dirac in 1930.[67] Consider as above systems A and B each with a Hilbert space H_A, H_B. Let the state of the composite system be

|Ψ⟩ ∈ H_A ⊗ H_B.

As indicated above, in general there is no way to associate a pure state to the component system A. However, it still is possible to associate a density matrix. Let

ρ_T = |Ψ⟩⟨Ψ|,

which is the projection operator onto this state. The state of A is the partial trace of ρ_T over the basis of system B:

ρ_A := Σ_{j=1}^{N_B} (I_A ⊗ ⟨j|_B) (|Ψ⟩⟨Ψ|) (I_A ⊗ |j⟩_B) = Tr_B ρ_T.

The sum runs over N_B := dim(H_B), with I_A the identity operator in H_A. ρ_A is sometimes called the reduced density matrix of ρ on subsystem A. Colloquially, we "trace out" system B to obtain the reduced density matrix on A.

For example, the reduced density matrix of A for the entangled state

(1/√2)(|0⟩_A ⊗ |1⟩_B − |1⟩_A ⊗ |0⟩_B)

discussed above is

ρ_A = (1/2)(|0⟩_A⟨0|_A + |1⟩_A⟨1|_A).

This demonstrates that, as expected, the reduced density matrix for an entangled pure ensemble is a mixed ensemble. Also not surprisingly, the density matrix of A for the pure product state |ψ⟩_A ⊗ |ϕ⟩_B discussed above is

ρ_A = |ψ⟩_A⟨ψ|_A.

In general, a bipartite pure state ρ is entangled if and only if its reduced states are mixed rather than pure.
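The partial trace is straightforward to compute numerically. A minimal sketch (illustrative, using NumPy's reshape/trace) reproducing the reduced density matrix of the Bell state above:

```python
import numpy as np

def partial_trace_B(rho, dA=2, dB=2):
    """Trace out subsystem B: rho_A[i, j] = sum_b rho[(i, b), (j, b)]."""
    return rho.reshape(dA, dB, dA, dB).trace(axis1=1, axis2=3)

# Density matrix of the Bell state (|01> - |10>)/sqrt(2)
bell = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho_T = np.outer(bell, bell.conj())

print(np.round(partial_trace_B(rho_T).real, 3))
# [[0.5 0. ]
#  [0.  0.5]]  -> the maximally mixed state, as stated above
```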
Two applications that use them

Reduced density matrices were explicitly calculated in different spin chains with unique ground state.[68] An example is the one-dimensional AKLT spin chain: the ground state can be divided into a block and an environment. The reduced density matrix of the block is proportional to a projector to a degenerate ground state of another Hamiltonian.

The reduced density matrix also was evaluated for XY spin chains, where it has full rank. It was proved that in the thermodynamic limit, the spectrum of the reduced density matrix of a large block of spins is an exact geometric sequence in this case.[69]

Entanglement as a resource

In quantum information theory, entangled states are considered a 'resource', i.e., something costly to produce and that allows implementing valuable transformations.[70] The setting in which this perspective is most evident is that of "distant labs",[71] i.e., two quantum systems labeled "A" and "B" on each of which arbitrary quantum operations can be performed, but which do not interact with each other quantum mechanically. The only interaction allowed is the exchange of classical information, which combined with the most general local quantum operations gives rise to the class of operations called LOCC (local operations and classical communication). These operations do not allow the production of entangled states between systems A and B. But if A and B are provided with a supply of entangled states, then these, together with LOCC operations, can enable a larger class of transformations. For example, an interaction between a qubit of A and a qubit of B can be realized by first teleporting A's qubit to B, then letting it interact with B's qubit (which is now a LOCC operation, since both qubits are in B's lab) and then teleporting the qubit back to A. Two maximally entangled states of two qubits are used up in this process. Thus entangled states are a resource that enables the realization of quantum interactions (or of quantum channels) in a setting where only LOCC are available, but they are consumed in the process. There are other applications where entanglement can be seen as a resource, e.g., private communication or distinguishing quantum states.[72]

Classification of entanglement

Not all quantum states are equally valuable as a resource. To quantify this value, different entanglement measures (see below) can be used that assign a numerical value to each quantum state. However, it is often interesting to settle for a coarser way to compare quantum states. This gives rise to different classification schemes. Most entanglement classes are defined based on whether states can be converted to other states using LOCC or a subclass of these operations. The smaller the set of allowed operations, the finer the classification. Important examples are:

● If two states can be transformed into each other by a local unitary operation, they are said to be in the same LU class. This is the finest of the usually considered classes. Two states in the same LU class have the same value for entanglement measures and the same value as a resource in the distant-labs setting. There is an infinite number of different LU classes (even in the simplest case of two qubits in a pure state).[73][74]
● If two states can be transformed into each other by local operations including measurements with probability larger than 0, they are said to be in the same 'SLOCC class' ("stochastic LOCC"). Qualitatively, two states ρ₁ and ρ₂ in the same SLOCC class are equally powerful (since I can transform one into the other and then do whatever it allows me to do), but since the transformations ρ₁ → ρ₂ and ρ₂ → ρ₁ may succeed with different probability, they are no longer equally valuable. E.g., for two pure qubits there are only two SLOCC classes: the entangled states (which contains both the (maximally entangled) Bell states and weakly entangled states like |00⟩ + 0.01|11⟩) and the separable ones (i.e., product states like |00⟩).[75][76]
● Instead of considering transformations of single copies of a state (like ρ₁ → ρ₂) one can define classes based on the possibility of multi-copy transformations. E.g., there are examples when ρ₁ → ρ₂ is impossible by LOCC, but ρ₁ ⊗ ρ₁ → ρ₂ is possible. A very important (and very coarse) classification is based on the property whether it is possible to transform an arbitrarily large number of copies of a state ρ into at least one pure entangled state. States that have this property are called distillable. These states are the most useful quantum states since, given enough of them, they can be transformed (with local operations) into any entangled state and hence allow for all possible uses. It came initially as a surprise that not all entangled states are distillable; those that are not are called 'bound entangled'.[77][72]

A different entanglement classification is based on what the quantum correlations present in a state allow A and B to do: one distinguishes three subsets of entangled states: (1) the non-local states, which produce correlations that cannot be explained by a local hidden variable model and thus violate a Bell inequality, (2) the steerable states that contain sufficient correlations for A to modify ("steer") by local measurements the conditional reduced state of B in such a way that A can prove to B that the state they possess is indeed entangled, and finally (3) those entangled states that are neither non-local nor steerable.[78] All three sets are non-empty.

Entropy

In this section, the entropy of a mixed state is discussed as well as how it can be viewed
as a measure of quantum entanglement.

Definition

Plot of von Neumann entropy vs. eigenvalue for a bipartite two-level pure state. When the eigenvalue is 0.5, the von Neumann entropy is at a maximum, corresponding to maximum entanglement.

In classical information theory, the Shannon entropy H is associated to a probability distribution p₁, …, p_n in the following way:[79]

H(p₁, …, p_n) = −Σ_i p_i log₂ p_i.

Since a mixed state ρ is a probability distribution over an ensemble, this leads naturally to the definition of the von Neumann entropy:

S(ρ) = −Tr(ρ log₂ ρ).

In general, one uses the Borel functional calculus to calculate a non-polynomial function such as log₂(ρ). If the nonnegative operator ρ acts on a finite-dimensional Hilbert space and has eigenvalues λ₁, …, λ_n, log₂(ρ) turns out to be nothing more than the operator with the same eigenvectors, but the eigenvalues log₂(λ₁), …, log₂(λ_n). The Shannon entropy is then:

S(ρ) = −Tr(ρ log₂ ρ) = −Σ_i λ_i log₂ λ_i.

Since an event of probability 0 should not contribute to the entropy, and given that

lim_{p→0} p log p = 0,

the convention 0 log(0) = 0 is adopted. This extends to the infinite-dimensional case as well: if ρ has spectral resolution

ρ = ∫ λ dP_λ,

we assume the same convention when calculating

ρ log₂ ρ = ∫ λ log₂ λ dP_λ.

As in statistical mechanics, the more uncertainty (number of microstates) the system should possess, the larger the entropy. For example, the entropy of any pure state is zero, which is unsurprising since there is no uncertainty about a system in a pure state. The entropy of either of the two subsystems of the entangled state discussed above is log(2) (which can be shown to be the maximum entropy for 2 × 2 mixed states).
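A short sketch (illustrative, not from the original text) computing the von Neumann entropy from the eigenvalues with the 0 log 0 = 0 convention, confirming zero entropy for a pure state and one bit for the maximally mixed qubit:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues,
    with the convention 0 * log(0) = 0."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]  # drop (near-)zero eigenvalues
    return float(-np.sum(eigs * np.log2(eigs)))

pure = np.diag([1.0, 0.0])   # a pure state: no uncertainty
mixed = np.eye(2) / 2        # maximally mixed qubit

print(von_neumann_entropy(pure))   # 0.0
print(von_neumann_entropy(mixed))  # 1.0 bit, i.e. log2(2), the maximum
```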

As a measure of entanglement

Entropy provides one tool that can be used to quantify entanglement, although other entanglement measures exist.[80][81] If the overall system is pure, the entropy of one subsystem can be used to measure its degree of entanglement with the other subsystems. For bipartite pure states, the von Neumann entropy of reduced states is the unique measure of entanglement in the sense that it is the only function on the family of states that satisfies certain axioms required of an entanglement measure.[82]

It is a classical result that the Shannon entropy achieves its maximum at, and only at, the uniform probability distribution {1/n, …, 1/n}. Therefore, a bipartite pure state ρ ∈ H_A ⊗ H_B is said to be a maximally entangled state if the reduced state of each subsystem of ρ is the diagonal matrix

diag(1/n, …, 1/n).

For mixed states, the reduced von Neumann entropy is not the only reasonable entanglement measure.
As an aside, the information-theoretic definition is closely related to entropy in the sense of statistical mechanics[83] (comparing the two definitions in the present context, it is customary to set the Boltzmann constant k = 1). For example, by properties of the Borel functional calculus, we see that for any unitary operator U,

S(ρ) = S(UρU*).

Indeed, without this property, the von Neumann entropy would not be well-defined. In particular, U could be the time evolution operator of the system, i.e.,

U(t) = exp(−iHt/ħ),

where H is the Hamiltonian of the system. Here the entropy is unchanged.

Rényi entropy also can be used as a measure of entanglement.[84]

Entanglement measures

Entanglement measures quantify the amount of entanglement in a (often viewed as a bipartite) quantum state. As aforementioned, entanglement entropy is the standard measure of entanglement for pure states (but no longer a measure of entanglement for mixed states). For mixed states, there are some entanglement measures in the literature and no single one is standard.[80]

● Entanglement cost
● Distillable entanglement
● Entanglement of formation
● Concurrence
● Relative entropy of entanglement
● Squashed entanglement
● Logarithmic negativity

Most (but not all) of these entanglement measures reduce for pure states to entanglement entropy, and are difficult (NP-hard) to compute for mixed states as the dimension of the entangled system grows.
Reconciliation of general relativity with the laws of quantum physics remains a problem, however … was pointed out by mathematician Marcel Grossmann and published by Grossmann and Einstein in 1913.[7] … formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state.[10] Einstein later declared the cosmological constant the biggest blunder of his life.[11]

During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein showed in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors"),[12] and in 1919 an expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of 29 May 1919,[13] instantly making Einstein famous.[14] Yet the theory remained outside the mainstream of theoretical physics and astrophysics until developments between approximately 1960 and 1975, now known as the golden age of general relativity.[15] Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations.[16] Ever more precise solar system tests confirmed the theory's predictive power,[17] and relativistic cosmology also became amenable to direct observational tests.[18]

General relativity has acquired a reputation as a theory of extraordinary beauty.[2][19][20] Subrahmanyan Chandrasekhar has noted that at multiple levels, general relativity exhibits what Francis Bacon has termed a "strangeness in the proportion" (i.e. elements that excite wonderment and surprise). It juxtaposes fundamental concepts (space and time versus matter and motion) which had previously been considered as entirely independent. Chandrasekhar also noted that Einstein's only guides in his search for an exact theory were the principle of equivalence and his sense that a proper description of gravity should be geometrical at its basis, so that there was an "element of revelation" in the manner in which Einstein arrived at his theory.[21] Other elements of beauty associated with the general theory of relativity are its simplicity and symmetry, the manner in which it incorporates invariance and unification, and its perfect logical consistency.[22]

In the preface to Relativity: The Special and the General Theory, Einstein said "The present book is intended, as far as possible, to give an exact insight into the theory of Relativity to those readers who, from a general scientific and philosophical point of view, are interested in the theory, but who are not conversant with the mathematical apparatus of theoretical physics. The work presumes a standard of education corresponding to that of a university matriculation examination, and, despite the shortness of the book, a fair amount of patience and force of will on the part of the reader. The author has spared himself no pains in his endeavour to present the main ideas in the simplest and most intelligible form …"

According to general relativity, objects in a gravitational field behave similarly to objects within an accelerating enclosure. For example, an observer will see a ball fall the same way in a rocket (left) as it does on Earth (right), provided that the acceleration of the rocket is equal to 9.8 m/s² (the acceleration due to gravity at the surface of the Earth).

At the base of classical mechanics is the notion that a body's motion can be described as a combination of free (or inertial) motion, and deviations from this free motion. Such deviations are caused by external forces acting on a body in accordance with Newton's second law of motion, which states that the net force acting on a body is equal to that body's (inertial) mass multiplied by its acceleration.[26] The preferred inertial motions are related to the geometry of space and time: in the standard reference frames of classical mechanics, objects in free motion move along straight lines at constant speed. In modern parlance, their paths are geodesics, straight world lines in curved spacetime.[27]

Conversely, one might expect that inertial motions, once identified by observing the actual motions of bodies and making allowances for the external forces (such as electromagnetism or friction), can be used to define the geometry of space, as well as a time coordinate. However, there is an ambiguity once gravity comes into play. According to Newton's law of gravity, and independently verified by experiments such as that of Eötvös and its successors (see Eötvös experiment), there is a universality of free fall (also known as the weak equivalence principle, or the universal equality of inertial and passive-gravitational mass): the trajectory of a test body in free fall depends only on its position and initial speed, but not on any of its material properties.[28] A simplified version of this is embodied in Einstein's elevator experiment, illustrated in the figure on the right: for an observer in an enclosed room, it is impossible to decide, by mapping the trajectory of bodies such as a dropped ball, whether the room is stationary in a gravitational field and the ball accelerating, or in free space aboard a rocket that is accelerating at a rate equal to that of the gravitational field versus the ball which upon release has nil acceleration.[29]

Given the universality of free fall, there is no observable distinction between inertial motion and motion under the influence of the gravitational force. This suggests the definition of a new class of inertial motion, namely that of objects in free fall under the influence of gravity. This new class of preferred motions, too, defines a geometry of space and time—in mathematical terms, it is the geodesic motion associated with a specific connection which depends on the gradient of the gravitational potential. Space, in this construction, still has the ordinary Euclidean geometry. However, spacetime as a whole is more complicated. As can be shown using simple thought experiments following the free-fall trajectories of different test particles, the result of transporting spacetime vectors that can denote a particle's velocity (time-like vectors) will vary with the particle's trajectory; mathematically speaking, the Newtonian connection is not integrable. From this, one can deduce that spacetime is curved. The resulting Newton–Cartan theory is a geometric formulation of Newtonian gravity using only covariant concepts, i.e. a description which is valid in any desired coordinate system.[30] In this geometric description, tidal effects—the relative acceleration of bodies in free fall—are related to the derivative of the connection, showing how the modified geometry is caused by the presence of mass.[31]

Relativistic generalization

Light cone

As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics.[32] In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group, which includes translations, rotations, boosts and reflections.) The differences between the two become significant when dealing with speeds approaching the speed of light, and with high-energy phenomena.[33]

With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see image). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent.[34] In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the spacetime's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure or conformal geometry.[35]

Special relativity is defined in the absence of gravity. For practical applications, it is a suitable model whenever gravity can be neglected. Bringing gravity into play, and assuming the universality of free fall motion, an analogous reasoning as in the previous section applies: there are no global inertial frames. Instead there are approximate inertial frames moving alongside freely falling particles. Translated into the language of spacetime: the straight time-like lines that define a gravity-free inertial frame are deformed to lines that are curved relative to each other, suggesting that the inclusion of gravity necessitates a change in spacetime geometry.[36]

A priori, it is not clear whether the new local frames in free fall coincide with the reference frames in which the laws of special relativity hold—that theory is based on the propagation of light, and thus on electromagnetism, which could have a different set of preferred frames. But using different assumptions about the special-relativistic frames (such as their being earth-fixed, or in free fall), one can derive different predictions for the gravitational redshift, that is, the way in which the frequency of light shifts as the light propagates through a gravitational field (cf. below). The actual measurements show that free-falling frames are the ones in which light propagates as it does in special relativity.[37] The generalization of this statement, namely that the laws of special relativity hold to good approximation in freely falling (and non-rotating) reference frames, is known as the Einstein equivalence principle, a crucial guiding principle for generalizing special-relativistic physics to include gravity.[38]

The same experimental data shows that time as measured by clocks in a gravitational field—proper time, to give the technical term—does not follow the rules of special relativity. In the language of spacetime geometry, it is not measured by the Minkowski metric. As in the Newtonian case, this is suggestive of a more general geometry. At small scales, all reference frames that are in free fall are equivalent, and approximately Minkowskian. Consequently, we are now dealing with a curved generalization of Minkowski space. The metric tensor that defines the geometry—in particular, how lengths and angles are measured—is not the Minkowski metric of special relativity; it is a generalization known as a semi- or pseudo-Riemannian metric. Furthermore, each Riemannian metric is naturally associated with one particular kind of connection, the Levi-Civita connection, and this is, in fact, the connection that satisfies the equivalence principle and makes space locally Minkowskian (that is, in suitable locally inertial coordinates, the metric is Minkowskian, and its first partial derivatives and the connection coefficients vanish).[39]

Einstein's equations

Main articles: Einstein field equations and Mathematics of general relativity


Having formulated the relativistic, geometric version of the effects of gravity, the question of gravity's source remains. In Newtonian gravity, the source is mass. In special relativity, mass turns out to be part of a more general quantity called the energy–momentum tensor, which includes both energy and momentum densities as well as stress: pressure and shear.[40] Using the equivalence principle, this tensor is readily generalized to curved spacetime. Drawing further upon the analogy with geometric Newtonian gravity, it is natural to assume that the field equation for gravity relates this tensor and the Ricci tensor, which describes a particular class of tidal effects: the change in volume for a small cloud of test particles that are initially at rest, and then fall freely. In special relativity, conservation of energy–momentum corresponds to the statement that the energy–momentum tensor is divergence-free. This formula, too, is readily generalized to curved spacetime by replacing partial derivatives with their curved-manifold counterparts, covariant derivatives studied in differential geometry. With this additional condition—the covariant divergence of the energy–momentum tensor, and hence of whatever is on the other side of the equation, is zero—the simplest nontrivial set of equations are what are called Einstein's (field) equations:

Einstein's field equations

G_μν ≡ R_μν − (1/2) R g_μν = κ T_μν

On the left-hand side is the Einstein tensor, G_μν, which is symmetric and a specific divergence-free combination of the Ricci tensor R_μν and the metric. In particular,

R = g^μν R_μν

is the curvature scalar. The Ricci tensor itself is related to the more general Riemann curvature tensor as

R_μν = R^α_μαν.

On the right-hand side, κ is a constant and T_μν is the energy–momentum tensor. All tensors are written in abstract index notation.[41] Matching the theory's prediction to observational results for planetary orbits or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics, the proportionality constant κ is found to be

κ = 8πG/c⁴,

where G is the Newtonian constant of gravitation and c the speed of light in vacuum.[42] When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations,

R_μν = 0.
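Evaluating κ numerically makes the weakness of the coupling tangible; a one-line sketch (using standard values for G and c, added here for illustration):

```python
import math

G = 6.67430e-11    # Newtonian constant of gravitation, m^3 kg^-1 s^-2
c = 2.99792458e8   # speed of light in vacuum, m/s

kappa = 8 * math.pi * G / c**4
print(kappa)  # ~2.08e-43 s^2 m^-1 kg^-1: enormous stress-energy is needed
              # to curve spacetime appreciably
```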

In general relativity, the world line of a particle free from all external, non-gravitational force is a particular type of geodesic in curved spacetime. In other words, a freely moving or falling particle always moves along a geodesic.

The geodesic equation is:

d²x^μ/ds² + Γ^μ_αβ (dx^α/ds)(dx^β/ds) = 0,

where s is a scalar parameter of motion (e.g. the proper time), and Γ^μ_αβ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients), symmetric in the two lower indices. Greek indices may take the values 0, 1, 2, 3, and the summation convention is used for repeated indices α and β. The quantity on the left-hand side of this equation is the acceleration of a particle, and so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. This equation of motion employs the Einstein notation, meaning that repeated indices are summed (i.e. from zero to three). The Christoffel symbols are functions of the four spacetime coordinates, and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.

Total force in general relativity

See also: Two-body problem in general relativity


In general relativity, the effective gravitational potential energy of an object of mass m revolving around a massive central body M is given by[43][44]

U_f(r) = −GMm/r + L²/(2mr²) − GML²/(mc²r³)

A conservative total force can then be obtained as its negative gradient

F_f(r) = −GMm/r² + L²/(mr³) − 3GML²/(mc²r⁴)

where L is the angular momentum. The first term represents the force of Newtonian gravity, which is described by the inverse-square law. The second term represents the centrifugal force in the circular motion. The third term represents the relativistic effect.
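To see the relative sizes of the three terms, here is a rough numerical sketch; the orbital values are approximate Sun/Mercury-like figures chosen only for illustration:

```python
G = 6.67430e-11   # m^3 kg^-1 s^-2
c = 2.99792458e8  # m/s
M = 1.989e30      # central mass (Sun-like), kg
m = 3.301e23      # orbiting mass (Mercury-like), kg
r = 5.79e10       # orbital radius, m
v = 4.79e4        # orbital speed, m/s
L = m * v * r     # orbital angular momentum, kg m^2 s^-1

newtonian = -G * M * m / r**2                         # inverse-square attraction
centrifugal = L**2 / (m * r**3)                       # centrifugal term
relativistic = -3 * G * M * L**2 / (m * c**2 * r**4)  # GR correction

print(newtonian, centrifugal, relativistic)
# The GR term is smaller than the Newtonian term by a factor ~3(v/c)^2,
# roughly 1e-7 here, yet it drives the perihelion precession of Mercury.
```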
Alternatives to general relativity

Main article: Alternatives to general relativity


There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Whitehead's theory, Brans–Dicke theory, teleparallelism, f(R) gravity and Einstein–Cartan theory.[45]

Definition and basic applications


See also: Mathematics of general relativity and Physical theories modified by
general relativity
The derivation outlined in the pre… Newton's law of universal gravitation says that every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centers. Separated objects attract and are attracted as if all their mass were concentrated at their centers. The publication of the law has become known as the "first great unification", as it marked the unification of the previously described phenomena of gravity on Earth with known astronomical behaviors.[1][2][3]

This is a general physical law derived from empirical observations by what Isaac Newton called inductive reasoning.[4] It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica ("the Principia"), first published on 5 July 1687.

The equation for universal gravitation thus takes the form:

F = G m₁m₂ / r²,

where F is the gravitational force acting between two objects, m₁ and m₂ are the masses of the objects, r is the distance between the centers of their masses, and G is the gravitational constant.

The first test of Newton's law of gravitation between masses in the laboratory was the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798.[5] It took place 111 years after the publication of Newton's Principia and approximately 71 years after his death.

Newton's law of gravitation resembles Coulomb's law of electrical forces, which is used
to calculate the magnitude of the electrical force arising between two charged bodies.
Both are inverse-square laws, where force is inversely proportional to the square of the
distance between the bodies. Coulomb's law has charge in place of mass and a
different constant.

Newton's law was later superseded by Albert Einstein's theory of general relativity, but the universality of the gravitational constant is intact and the law still continues to be used as an excellent approximation of the effects of gravity in most applications. Relativity is required only when there is a need for extreme accuracy, or when dealing with very strong gravitational fields, such as those found near extremely massive and dense objects, or at small distances (such as Mercury's orbit around the Sun).

History
Main article: History of gravitational theory
Around 1600, the scientific method began to take root. René Descartes started over with a more fundamental view, developing ideas of matter and action independent of theology. Galileo Galilei wrote about experimental measurements of falling and rolling objects. Johannes Kepler's laws of planetary motion summarized Tycho Brahe's astronomical observations.[6]: 132

Around 1666 Isaac Newton developed the idea that Kepler's laws must also apply to the orbit of the Moon around the Earth and then to all objects on Earth. The analysis required assuming that the gravitation force acted as if all of the mass of the Earth were concentrated at its center, an unproven conjecture at that time. His calculation of the Moon's orbital period was within 16% of the known value. By 1680, new values for the diameter of the Earth improved his orbit time to within 1.6%, but more importantly Newton had found a proof of his earlier conjecture.[7]: 201

In 1687 Newton published his Principia, which combined his laws of motion with new mathematical analysis to explain Kepler's empirical results.[6]: 134 His explanation was in the form of a law of universal gravitation: any two bodies are attracted by a force proportional to their mass and inversely proportional to their separation squared.[8]: 28
Newton's original formula was:

Forceofgravity∝massofobject1×massofobject2distancefromcen
ters2

where the symbol ∝ means "is proportional to". To make this into an equal-sided
formula or equation, there needed to be a multiplying factor or constant that would give
the correct force of gravity no matter the value of the masses or distance between them
(the gravitational constant). Newton would need an accurate measure of this constant
to prove his inverse-square law. When Newton presented Book 1 of the unpublished
text in April 1686 to the Royal Society, Robert Hooke claimed that Newton had obtained
the inverse square law from him, ultimately a frivolous accusation.[7]: 204

Newton's "causes hitherto unknown"[edit]

Main article: Action at a distance


While Newton was able to formulate his law of gravity in his monumental work, he was
deeply uncomfortable with the notion of "action at a distance" that his equations implied.
In 1692, in his third letter to Bentley, he wrote: "That one body may act upon another at
a distance through a vacuum without the mediation of anything else, by and through
which their action and force may be conveyed from one another, is to me so great an
absurdity that, I believe, no man who has in philosophic matters a competent faculty of
thinking could ever fall into it."

He never, in his words, "assigned the cause of this power". In all other cases, he used
the phenomenon of motion to explain the origin of various forces acting on bodies, but
in the case of gravity, he was unable to experimentally identify the motion that produces
the force of gravity (although he invented two mechanical hypotheses in 1675 and
1717). Moreover, he refused to even offer a hypothesis as to the cause of this force on
grounds that to do so was contrary to sound science. He lamented that "philosophers
have hitherto attempted the search of nature in vain" for the source of the gravitational
force, as he was convinced "by many reasons" that there were "causes hitherto
unknown" that were fundamental to all the "phenomena of nature". These fundamental
phenomena are still under investigation and, though hypotheses abound, the definitive
answer has yet to be found. And in Newton's 1713 General Scholium in the second
edition of Principia: "I have not yet been able to discover the cause of these properties
of gravity from phenomena and I feign no hypotheses.... It is enough that gravity does
really exist and acts according to the laws I have explained, and that it abundantly
serves to account for all the motions of celestial bodies."[9]

Modern form[edit]
In modern language, the law states the following:

Every point mass attracts every single other point mass by a force acting along
the line intersecting both points. The force is proportional to the product of the
two masses and inversely proportional to the square of the distance between
them:[10]

F = G \frac{m_1 m_2}{r^2}
where

● F is the force between the masses;
● G is the Newtonian constant of gravitation (6.674×10⁻¹¹ m³⋅kg⁻¹⋅s⁻²);
● m1 is the first mass;
● m2 is the second mass;
● r is the distance between the centers of the masses.

Error plot showing experimental values for G.

Assuming SI units, F is measured in newtons (N), m1 and m2 in kilograms (kg), r in
meters (m), and the constant G is 6.67430(15)×10⁻¹¹ m³⋅kg⁻¹⋅s⁻².[11] The value of the
constant G was first accurately determined from the results of the Cavendish
experiment conducted by the British scientist Henry Cavendish in 1798, although
Cavendish did not himself calculate a numerical value for G.[5] This experiment was
also the first test of Newton's theory of gravitation between masses in the laboratory. It
took place 111 years after the publication of Newton's Principia and 71 years after
Newton's death, so none of Newton's calculations could use the value of G; instead he
could only calculate a force relative to another force.
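To make the formula concrete, here is a minimal Python sketch (not part of the original
article; the helper name gravitational_force and the rounded Earth/Moon values are
illustrative assumptions):

# Minimal sketch: evaluate Newton's law of universal gravitation.
# The Earth/Moon values below are rounded, for illustration only.

G = 6.67430e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1: float, m2: float, r: float) -> float:
    """Magnitude of the gravitational force between two point masses, in newtons."""
    return G * m1 * m2 / r**2

m_earth = 5.97e24  # kg (approximate)
m_moon = 7.35e22   # kg (approximate)
r = 3.84e8         # mean Earth-Moon distance, m (approximate)

print(gravitational_force(m_earth, m_moon, r))  # ~2.0e20 N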
Bodies with spatial extent[edit]

Gravitational field strength within the Earth

Gravity field near the surface of the Earth – an object is shown accelerating toward the surface

If the bodies in question have spatial extent (as opposed to being point masses), then
the gravitational force between them is calculated by summing the contributions of the
notional point masses that constitute the bodies. In the limit, as the component point
masses become "infinitely small", this entails integrating the force (in vector form, see
below) over the extents of the two bodies.

In this way, it can be shown that an object with a spherically symmetric distribution of
mass exerts the same gravitational attraction on external bodies as if all the object's
mass were concentrated at a point at its center.[10] (This is not generally true for non-
spherically symmetrical bodies.)

For points inside a spherically symmetric distribution of matter, Newton's shell theorem
can be used to find the gravitational force. The theorem tells us how different parts of
the mass distribution affect the gravitational force measured at a point located a
distance r0 from the center of the mass distribution:[12]

● The portion of the mass that is located at radii r < r0 causes the same force at
the radius r0 as if all of the mass enclosed within a sphere of radius r0 was
concentrated at the center of the mass distribution (as noted above).
● The portion of the mass that is located at radii r > r0 exerts no net gravitational
force at the radius r0 from the center. That is, the individual gravitational
forces exerted on a point at radius r0 by the elements of the mass outside the
radius r0 cancel each other.
As a consequence, for example, within a shell of uniform thickness and density there is
no net gravitational acceleration anywhere within the hollow sphere.

Vector form[edit]

Gravity field surrounding Earth from a macroscopic perspective.

Newton's law of universal gravitation can be written as a vector equation to account for
the direction of the gravitational force as well as its magnitude. In this formula, quantities
in bold represent vectors.

\mathbf{F}_{21} = -G \frac{m_1 m_2}{|\mathbf{r}_{21}|^2} \hat{\mathbf{r}}_{21} = -G \frac{m_1 m_2}{|\mathbf{r}_{21}|^3} \mathbf{r}_{21}

where

● F21 is the force exerted on body 2 by body 1,
● G is the gravitational constant,
● m1 and m2 are respectively the masses of bodies 1 and 2,
● r21 = r2 − r1 is the displacement vector from body 1 to body 2, and
● r̂21 = (r2 − r1)/|r2 − r1| is the unit vector from body 1 to body 2.[13]
It can be seen that the vector form of the equation is the same as the scalar form given
earlier, except that F is now a vector quantity, and the right hand side is multiplied by
the appropriate unit vector. Also, it can be seen that F12 = −F21.
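A short Python sketch of the vector form may help (illustrative only; the function name
and the NumPy dependency are assumptions of this example, not anything from the
article). Swapping the roles of the two bodies reproduces F12 = −F21:

# Minimal sketch of the vector form of Newton's law using NumPy.
import numpy as np

G = 6.67430e-11  # m^3 kg^-1 s^-2

def force_on_2_from_1(m1, m2, r1, r2):
    """Vector gravitational force exerted on body 2 by body 1."""
    r21 = r2 - r1                     # displacement from body 1 to body 2
    d = np.linalg.norm(r21)
    return -G * m1 * m2 * r21 / d**3  # -G m1 m2 r21 / |r21|^3

r1 = np.array([0.0, 0.0, 0.0])     # body 1 at the origin
r2 = np.array([3.84e8, 0.0, 0.0])  # body 2 at roughly the lunar distance
f21 = force_on_2_from_1(5.97e24, 7.35e22, r1, r2)  # force on body 2
f12 = force_on_2_from_1(7.35e22, 5.97e24, r2, r1)  # force on body 1
print(np.allclose(f12, -f21))  # True, consistent with Newton's third law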

Gravity field[edit]
Main article: Gravitational field
The gravitational field is a vector field that describes the gravitational force per unit
mass that would be applied on an object at any given point in space. It is equal to the
gravitational acceleration at that point.

It is a generalisation of the vector form, which becomes particularly useful if more than
two objects are involved (such as a rocket between the Earth and the Moon). For two
objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r12 and
m instead of m2 and define the gravitational field g(r) as:

\mathbf{g}(\mathbf{r}) = -G \frac{m_1}{|\mathbf{r}|^2} \hat{\mathbf{r}}

so that we can write:

\mathbf{F}(\mathbf{r}) = m\, \mathbf{g}(\mathbf{r}).

This formulation is dependent on the objects causing the field. The field has units of
acceleration; in SI, this is m/s².

Gravitational fields are also conservative; that is, the work done by gravity from one
position to another is path-independent. This has the consequence that there exists a
gravitational potential field V(r) such that

\mathbf{g}(\mathbf{r}) = -\nabla V(\mathbf{r}).
If m1 is a point mass or the mass of a sphere with homogeneous mass distribution, the
force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from
the center of the sphere. In that case

V(r) = -\frac{G m_1}{r}.

As per Gauss's law, the field in a symmetric body can be found by the mathematical
equation:

\oint_{\partial V} \mathbf{g}(\mathbf{r}) \cdot d\mathbf{A} = -4\pi G M_{\text{enc}},

where ∂V is a closed surface and Menc is the mass enclosed by the surface.

Hence, for a hollow sphere of radius R and total mass M,

|\mathbf{g}(\mathbf{r})| = \begin{cases} 0, & \text{if } r < R \\[4pt] \dfrac{GM}{r^2}, & \text{if } r \ge R \end{cases}

For a uniform solid sphere of radius R and total mass M,

|\mathbf{g}(\mathbf{r})| = \begin{cases} \dfrac{GMr}{R^3}, & \text{if } r < R \\[4pt] \dfrac{GM}{r^2}, & \text{if } r \ge R \end{cases}
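These piecewise formulas translate directly into code. The following Python sketch
(illustrative; the function name and the rounded Earth values are assumptions)
evaluates |g(r)| for a uniform solid sphere:

# Minimal sketch: field magnitude of a uniform solid sphere.
G = 6.67430e-11  # m^3 kg^-1 s^-2

def solid_sphere_field(M: float, R: float, r: float) -> float:
    """|g(r)| for a uniform solid sphere of mass M and radius R."""
    if r < R:
        return G * M * r / R**3  # grows linearly inside the sphere
    return G * M / r**2          # inverse-square outside

# Rounded Earth values, for illustration: surface gravity is ~9.8 m/s^2.
M_earth, R_earth = 5.97e24, 6.371e6
print(solid_sphere_field(M_earth, R_earth, R_earth))      # ~9.8
print(solid_sphere_field(M_earth, R_earth, R_earth / 2))  # ~4.9, half the surface value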

Limitations[edit]
Newton's description of gravity is sufficiently accurate for many practical purposes and
is therefore widely used. Deviations from it are small when the dimensionless quantities
φ/c² and (v/c)² are both much less than one, where φ is the gravitational potential, v is
the velocity of the objects being studied, and c is the speed of light in vacuum.[14] For
example, Newtonian gravity provides an accurate description of the Earth/Sun system,
since

\frac{\phi}{c^2} = \frac{G M_{\text{sun}}}{r_{\text{orbit}} c^2} \sim 10^{-8}, \qquad \left(\frac{v_{\text{Earth}}}{c}\right)^2 = \left(\frac{2\pi r_{\text{orbit}}}{(1\,\text{yr})\, c}\right)^2 \sim 10^{-8},

where rorbit is the radius of the Earth's orbit around the Sun.
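These order-of-magnitude estimates are easy to verify numerically; the Python sketch
below (illustrative, with rounded values for the solar mass, orbital radius, and year)
reproduces both ratios:

# Quick numeric check of the two dimensionless parameters (rounded values).
import math

G = 6.67430e-11     # m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # kg (approximate)
r_orbit = 1.496e11  # Earth's orbital radius, m (approximate)
year = 3.156e7      # seconds in one year (approximate)

phi_over_c2 = G * M_sun / (r_orbit * c**2)
v_over_c_sq = (2 * math.pi * r_orbit / (year * c)) ** 2

print(f"{phi_over_c2:.1e}")  # ~1e-08
print(f"{v_over_c_sq:.1e}")  # ~1e-08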

In situations where either dimensionless parameter is large, general relativity must be
used to describe the system. General relativity reduces to Newtonian gravity in the limit
of small potential and low velocities, so Newton's law of gravitation is often said to be
the low-gravity limit of general relativity.

Observations conflicting with Newton's formula[edit]


● Newton's theory does not fully explain the precession of the perihelion of the
orbits of the planets, especially that of Mercury, which was detected long after
Newton's lifetime.[15] There is a 43 arcsecond per century discrepancy
between the Newtonian calculation, which arises only from the gravitational
attractions of the other planets, and the observed precession, measured with
advanced telescopes during the 19th century.
● The predicted angular deflection of light rays by gravity (treated as particles
travelling at the expected speed) that is calculated by using Newton's theory
is only one-half of the deflection that is observed by astronomers.[citation needed]
Calculations using general relativity are in much closer agreement with the
astronomical observations.
● In spiral galaxies, the orbiting of stars around their centers seems to strongly
disobey both Newton's law of universal gravitation and general relativity.
Astrophysicists, however, explain this marked phenomenon by assuming the
presence of large amounts of dark matter.
Einstein's solution[edit]

The first two conflicts with observations above were explained by Einstein's theory of
general relativity, in which gravitation is a manifestation of curved spacetime instead of
being due to a force propagated between bodies. In Einstein's theory, energy and
momentum distort spacetime in their vicinity, and other particles move in trajectories
determined by the geometry of spacetime. This allowed a description of the motions of
light and mass that was consistent with all available observations. In general relativity,
the gravitational force is a fictitious force resulting from the curvature of spacetime,
because the gravitational acceleration of a body in free fall is due to its world line being
a geodesic of spacetime.

Extensions[edit]
The previous section contains all the information needed to define general relativity,
describe its key properties, and address a question of crucial importance in physics,
namely how the theory can be used for model-building.

Definition and basic properties[edit]

General relativity is a metric theory of gravitation. At its core are Einstein's equations,
which describe the relation between the geometry of a four-dimensional pseudo-
Riemannian manifold representing spacetime, and the energy–momentum contained in
that spacetime.[46] Phenomena that in classical mechanics are ascribed to the action of
the force of gravity (such as free fall, orbital motion, and spacecraft trajectories)
correspond to inertial motion within a curved geometry of spacetime in general relativity;
there is no gravitational force deflecting objects from their natural, straight paths.
Instead, gravity corresponds to changes in the properties of space and time, which in
turn changes the straightest-possible paths that objects will naturally follow.[47] The
curvature is, in turn, caused by the energy–momentum of matter. Paraphrasing the
relativist John Archibald Wheeler, spacetime tells matter how to move; matter tells
spacetime how to curve.[48]

While general relativity replaces the scalar gravitational potential of classical physics by
a symmetric rank-two tensor, the latter reduces to the former in certain limiting cases.

\lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}.

The differentiation of the numerator and denominator often simplifies the quotient or
converts it to a limit that can be directly evaluated.

History[edit]
Guillaume de l'Hôpital (also written l'Hospital[a]) published this rule in his 1696 book
Analyse des Infiniment Petits pour l'Intelligence des Lignes Courbes (literal translation:
Analysis of the Infinitely Small for the Understanding of Curved Lines), the first textbook
on differential calculus.[1][b] However, it is believed that the rule was discovered by the
Swiss mathematician Johann Bernoulli.[3]

General form[edit]
The general form of L'Hôpital's rule covers many cases. Let c and L be extended real
numbers (i.e., real numbers, positive infinity, or negative infinity). Let I be an open
interval containing c (for a two-sided limit) or an open interval with endpoint c (for a one-
sided limit, or a limit at infinity if c is infinite). The real-valued functions f and g are
assumed to be differentiable on I except possibly at c, and additionally

g'(x) \neq 0

on I except possibly at c. It is also assumed that

\lim_{x \to c} \frac{f'(x)}{g'(x)} = L.

Thus, the rule applies to situations in which the ratio of the derivatives
has a finite or infinite limit, but not to situations in which that ratio fluctuates permanently
as x gets closer and closer to c.

If either

\lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0

or

\lim_{x \to c} |f(x)| = \lim_{x \to c} |g(x)| = \infty,

then

\lim_{x \to c} \frac{f(x)}{g(x)} = L.
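As an illustrative check (an addition of this rewrite, not part of the original text), one can
compare the two limits symbolically with SymPy for a 0/0 form such as sin x / x at c = 0:

# Illustrative SymPy check of L'Hopital's rule on the 0/0 form sin(x)/x at c = 0.
import sympy as sp

x = sp.Symbol('x')
f, g = sp.sin(x), x

lhs = sp.limit(f / g, x, 0)                          # limit of the original quotient
rhs = sp.limit(sp.diff(f, x) / sp.diff(g, x), x, 0)  # limit of the derivative quotient

print(lhs, rhs)  # 1 1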

The full statement requires four conditions:

1. Indeterminate form: the limits of f and g are both 0 or both ±∞;
2. Differentiability of f and g on I except possibly at c;
3. Non-vanishing derivative of the denominator: g′(x) ≠ 0 on I except possibly at c; and
4. Existence of the limit of the quotient of the derivatives: lim_{x→c} f′(x)/g′(x) exists.

Where one of the above conditions fails, the conclusion of L'Hôpital's rule can fail as
well. For example, take f(x) = x + 1, g(x) = 2x + 1 and c = 1. Then

\lim_{x \to 1} f(x) = \lim_{x \to 1} (x+1) = 1 + 1 = 2 \neq 0

and

\lim_{x \to 1} g(x) = \lim_{x \to 1} (2x+1) = 2(1) + 1 = 3 \neq 0.

This means that the form is not indeterminate.

The second and third conditions are satisfied by f(x) and g(x): both are differentiable
everywhere, and g′(x) = 2 ≠ 0. The fourth condition is also satisfied, with

\lim_{x \to 1} \frac{f'(x)}{g'(x)} = \lim_{x \to 1} \frac{(x+1)'}{(2x+1)'} = \lim_{x \to 1} \frac{1}{2} = \frac{1}{2}.

But L'Hôpital's rule fails in this counterexample, since

\lim_{x \to 1} \frac{f(x)}{g(x)} = \lim_{x \to 1} \frac{x+1}{2x+1} = \frac{\lim_{x \to 1}(x+1)}{\lim_{x \to 1}(2x+1)} = \frac{2}{3} \neq \frac{1}{2} = \lim_{x \to 1} \frac{f'(x)}{g'(x)}.

Differentiability of functions[edit]

Differentiability of functions is a requirement because if a function is not differentiable,
then its derivative is not guaranteed to exist at each point in I. The fact that I is an open
interval is grandfathered in from the hypothesis of Cauchy's mean value theorem. The
notable exception of the possibility of the functions being not differentiable at c exists
because L'Hôpital's rule only requires the derivative to exist as the function approaches
c; the derivative does not need to be taken at c.
For example, let

f(x) = \begin{cases} \sin x, & x \neq 0 \\ 1, & x = 0 \end{cases}, \qquad g(x) = x, \qquad c = 0.

In this case, f(x) is not differentiable at c = 0 (it is not even continuous there). However,
since f(x) is differentiable everywhere except 0, the limit

\lim_{x \to c} f'(x)

still exists. Thus, since

\lim_{x \to c} \frac{f(x)}{g(x)} = \frac{0}{0}

and

\lim_{x \to c} \frac{f'(x)}{g'(x)}

exists, L'Hôpital's rule still holds.

Derivative of denominator is zero[edit]

The necessity of the condition that g′(x) ≠ 0 near c can be seen by the following
counterexample due to Otto Stolz.[6] Let

f(x) = x + \sin x \cos x

and

g(x) = f(x)\, e^{\sin x}.

Then there is no limit for f(x)/g(x) as x → ∞. However,

\frac{f'(x)}{g'(x)} = \frac{2\cos^2 x}{(2\cos^2 x)\, e^{\sin x} + (x + \sin x \cos x)\, e^{\sin x} \cos x} = \frac{2\cos x}{2\cos x + x + \sin x \cos x}\, e^{-\sin x},

which tends to 0 as x → ∞. Further examples of this type were found by Ralph P. Boas
Jr.[7]
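A numeric sketch (illustrative, not from the original) makes the failure visible: f′/g′
decays toward 0, while f/g = e^(−sin x) keeps oscillating and never settles:

# Numeric illustration of Stolz's counterexample: f'/g' tends to 0,
# while f/g = exp(-sin x) oscillates between 1/e and e with no limit.
import math

def f(x):
    return x + math.sin(x) * math.cos(x)

def g(x):
    return f(x) * math.exp(math.sin(x))

for x in [100.0, 1000.0, 10000.0]:
    ratio = f(x) / g(x)  # equals exp(-sin x), so it keeps oscillating
    deriv_ratio = (2 * math.cos(x) / (2 * math.cos(x) + x + math.sin(x) * math.cos(x))
                   * math.exp(-math.sin(x)))
    print(f"x={x:>8}: f/g={ratio:.4f}  f'/g'={deriv_ratio:.6f}")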

Limit of derivatives does not exist[edit]

The requirement that the limit

\lim_{x \to c} \frac{f'(x)}{g'(x)}

exists is essential. Without this condition, f′ or g′ may exhibit undamped oscillations as
x approaches c, in which case L'Hôpital's rule does not apply. For example, if

f(x) = x + \sin x, \qquad g(x) = x, \qquad c = \pm\infty,

then

\frac{f'(x)}{g'(x)} = \frac{1 + \cos x}{1};

this expression does not approach a limit as x goes to c, since the cosine function
oscillates between 1 and −1. But working with the original functions, the limit

\lim_{x \to \infty} \frac{f(x)}{g(x)}

can be shown to exist:

\lim_{x \to \infty} \frac{f(x)}{g(x)} = \lim_{x \to \infty} \left( \frac{x + \sin x}{x} \right) = \lim_{x \to \infty} \left( 1 + \frac{\sin x}{x} \right) = 1 + 0 = 1.
In a case such as this, all that can be concluded is that

\liminf_{x \to c} \frac{f'(x)}{g'(x)} \le \liminf_{x \to c} \frac{f(x)}{g(x)} \le \limsup_{x \to c} \frac{f(x)}{g(x)} \le \limsup_{x \to c} \frac{f'(x)}{g'(x)},

so that if the limit of f/g exists, then it must lie between the inferior and superior limits of
f′/g′. (In the example above, this is true, since 1 indeed lies between 0 and 2.)

Examples[edit]
● Here is a basic example involving the exponential function, which involves the
indeterminate form 0/0 at x = 0:

\lim_{x \to 0} \frac{e^x - 1}{x^2 + x} = \lim_{x \to 0} \frac{\frac{d}{dx}(e^x - 1)}{\frac{d}{dx}(x^2 + x)} = \lim_{x \to 0} \frac{e^x}{2x + 1} = 1.


● This is a more elaborate example involving 0/0. Applying L'Hôpital's rule a
single time still results in an indeterminate form. In this case, the limit may be
evaluated by applying the rule three times:

\lim_{x \to 0} \frac{2\sin x - \sin 2x}{x - \sin x} = \lim_{x \to 0} \frac{2\cos x - 2\cos 2x}{1 - \cos x} = \lim_{x \to 0} \frac{-2\sin x + 4\sin 2x}{\sin x} = \lim_{x \to 0} \frac{-2\cos x + 8\cos 2x}{\cos x} = \frac{-2 + 8}{1} = 6.


● Here is an example involving ∞/∞:

\lim_{x \to \infty} x^n \cdot e^{-x} = \lim_{x \to \infty} \frac{x^n}{e^x} = \lim_{x \to \infty} \frac{n x^{n-1}}{e^x} = n \cdot \lim_{x \to \infty} \frac{x^{n-1}}{e^x}.

Repeatedly apply L'Hôpital's rule until the exponent is zero (if n is an integer) or
negative (if n is fractional) to conclude that the limit is zero.
● Here is an example involving the indeterminate form 0 · ∞ (see below), which
is rewritten as the form ∞/∞:

\lim_{x \to 0^+} x \ln x = \lim_{x \to 0^+} \frac{\ln x}{\frac{1}{x}} = \lim_{x \to 0^+} \frac{\frac{1}{x}}{-\frac{1}{x^2}} = \lim_{x \to 0^+} (-x) = 0.


● Here is an example involving the mortgage repayment formula and 0/0. Let P
be the principal (loan amount), r the interest rate per period and n the number
of periods. When r is zero, the repayment amount per period is P/n (since only
principal is being repaid); this is consistent with the formula for non-zero
interest rates:

\lim_{r \to 0} \frac{P r (1+r)^n}{(1+r)^n - 1} = P \lim_{r \to 0} \frac{(1+r)^n + r n (1+r)^{n-1}}{n (1+r)^{n-1}} = \frac{P}{n}.


● One can also use L'Hôpital's rule to prove the following theorem. If f is twice-
differentiable in a neighborhood of x and its second derivative is continuous
on this neighborhood, then

\lim_{h \to 0} \frac{f(x+h) + f(x-h) - 2f(x)}{h^2} = \lim_{h \to 0} \frac{f'(x+h) - f'(x-h)}{2h} = \lim_{h \to 0} \frac{f''(x+h) + f''(x-h)}{2} = f''(x).


● Sometimes L'Hôpital's rule is invoked in a tricky way: suppose f(x) + f′(x)
converges as x → ∞ and that e^x · f(x) converges to positive or negative
infinity. Then:

\lim_{x \to \infty} f(x) = \lim_{x \to \infty} \frac{e^x \cdot f(x)}{e^x} = \lim_{x \to \infty} \frac{e^x (f(x) + f'(x))}{e^x} = \lim_{x \to \infty} (f(x) + f'(x)),

and so the limit of f(x) as x → ∞ exists, and lim_{x→∞} f′(x) = 0. The result
remains true without the added hypothesis that e^x · f(x) converges to positive
or negative infinity, but the justification is then incomplete.
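Several of the worked limits above can be verified symbolically; the following SymPy
sketch (illustrative; the concrete n = 12 in the mortgage check is an assumption) does
so:

# Illustrative SymPy checks of a few of the worked limits above.
import sympy as sp

x, r = sp.symbols('x r')
P = sp.Symbol('P', positive=True)

print(sp.limit((sp.exp(x) - 1) / (x**2 + x), x, 0))                   # 1
print(sp.limit((2*sp.sin(x) - sp.sin(2*x)) / (x - sp.sin(x)), x, 0))  # 6
print(sp.limit(x * sp.ln(x), x, 0, '+'))                              # 0
print(sp.limit(P*r*(1 + r)**12 / ((1 + r)**12 - 1), r, 0))            # P/12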

Complications[edit]
Sometimes L'Hôpital's rule does not lead to an answer in a finite number of steps unless
some additional steps are applied. Examples include the following:

● Two applications can lead to a return to the original expression that was to be
evaluated:

\lim_{x \to \infty} \frac{e^x + e^{-x}}{e^x - e^{-x}} = \lim_{x \to \infty} \frac{e^x - e^{-x}}{e^x + e^{-x}} = \lim_{x \to \infty} \frac{e^x + e^{-x}}{e^x - e^{-x}} = \cdots.

This situation can be dealt with by substituting y = e^x and noting that y goes to
infinity as x goes to infinity; with this substitution, this problem can be solved
with a single application of the rule:

\lim_{x \to \infty} \frac{e^x + e^{-x}}{e^x - e^{-x}} = \lim_{y \to \infty} \frac{y + y^{-1}}{y - y^{-1}} = \lim_{y \to \infty} \frac{1 - y^{-2}}{1 + y^{-2}} = \frac{1}{1} = 1.

Alternatively, the numerator and denominator can both be multiplied by e^x, at
which point L'Hôpital's rule can immediately be applied successfully:[8]

\lim_{x \to \infty} \frac{e^x + e^{-x}}{e^x - e^{-x}} = \lim_{x \to \infty} \frac{e^{2x} + 1}{e^{2x} - 1} = \lim_{x \to \infty} \frac{2e^{2x}}{2e^{2x}} = 1.


● An arbitrarily large number of applications may never lead to an answer even
without repeating:

\lim_{x \to \infty} \frac{x^{1/2} + x^{-1/2}}{x^{1/2} - x^{-1/2}} = \lim_{x \to \infty} \frac{\tfrac{1}{2} x^{-1/2} - \tfrac{1}{2} x^{-3/2}}{\tfrac{1}{2} x^{-1/2} + \tfrac{1}{2} x^{-3/2}} = \lim_{x \to \infty} \frac{-\tfrac{1}{4} x^{-3/2} + \tfrac{3}{4} x^{-5/2}}{-\tfrac{1}{4} x^{-3/2} - \tfrac{3}{4} x^{-5/2}} = \cdots.

This situation too can be dealt with by a transformation of variables, in this case
y = \sqrt{x}:

\lim_{x \to \infty} \frac{x^{1/2} + x^{-1/2}}{x^{1/2} - x^{-1/2}} = \lim_{y \to \infty} \frac{y + y^{-1}}{y - y^{-1}} = \lim_{y \to \infty} \frac{1 - y^{-2}}{1 + y^{-2}} = \frac{1}{1} = 1.

Again, an alternative approach is to multiply numerator and denominator by
x^{1/2} before applying L'Hôpital's rule:

\lim_{x \to \infty} \frac{x^{1/2} + x^{-1/2}}{x^{1/2} - x^{-1/2}} = \lim_{x \to \infty} \frac{x + 1}{x - 1} = \lim_{x \to \infty} \frac{1}{1} = 1.
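The substitution trick is easy to mirror symbolically; the SymPy sketch below
(illustrative) checks the first complication, where y = e^x reduces the problem to a single
application:

# Illustrative SymPy sketch of the substitution y = e^x from the first
# complication above: the rewritten quotient needs only one application.
import sympy as sp

x, y = sp.symbols('x y', positive=True)

original = (sp.exp(x) + sp.exp(-x)) / (sp.exp(x) - sp.exp(-x))
substituted = (y + 1/y) / (y - 1/y)  # after setting y = e^x

print(sp.limit(original, x, sp.oo))     # 1
print(sp.limit(substituted, y, sp.oo))  # 1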

A common pitfall is using L'Hôpital's rule with some circular reasoning to compute a
derivative via a difference quotient. For example, consider the task of proving the
derivative formula for powers of x:

\lim_{h \to 0} \frac{(x+h)^n - x^n}{h} = n x^{n-1}.

Applying L'Hôpital's rule and finding the derivatives with respect to h of the numerator
and the denominator yields nx^{n−1}, as expected. However, differentiating the numerator
requires the use of the very fact that is being proven. This is an example of begging the
question, since one may not assume the fact to be proven during the course of the
proof.

A similar pitfall occurs in the calculation of

\lim_{x \to 0} \frac{\sin x}{x} = 1.

Proving that differentiating sin x gives cos x involves calculating the difference quotient

\lim_{h \to 0} \frac{\sin h}{h}

in the first place, so a different method, such as the squeeze theorem, must be used
instead.

Other indeterminate forms[edit]


Other indeterminate forms, such as 1^∞, 0^0, ∞^0, 0 · ∞, and ∞ − ∞, can sometimes
be evaluated using L'Hôpital's rule. For example, to evaluate a limit involving ∞ − ∞,
convert the difference of two functions to a quotient:

\begin{aligned}
\lim_{x \to 1} \left( \frac{x}{x-1} - \frac{1}{\ln x} \right) &= \lim_{x \to 1} \frac{x \ln x - x + 1}{(x-1) \ln x} && (1) \\
&= \lim_{x \to 1} \frac{\ln x}{\frac{x-1}{x} + \ln x} && (2) \\
&= \lim_{x \to 1} \frac{x \ln x}{x - 1 + x \ln x} && (3) \\
&= \lim_{x \to 1} \frac{1 + \ln x}{1 + 1 + \ln x} && (4) \\
&= \lim_{x \to 1} \frac{1 + \ln x}{2 + \ln x} = \frac{1}{2},
\end{aligned}

where (1) combines the two fractions, (2) and (4) apply L'Hôpital's rule to a 0/0 form,
and (3) multiplies numerator and denominator by x.
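The result can be confirmed symbolically; a minimal SymPy check (illustrative, not part
of the original text):

# Illustrative SymPy check of the infinity-minus-infinity example above.
import sympy as sp

x = sp.Symbol('x', positive=True)
expr = x / (x - 1) - 1 / sp.ln(x)

print(sp.limit(expr, x, 1))  # 1/2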

A breakthrough in FET research came with the work of Egyptian engineer Mohamed
Atalla in the late 1950s.[3] In 1958 he presented experimental work which showed that
growing a thin silicon oxide layer on a clean silicon surface neutralizes surface states.
This is known as surface passivation, a method that became critical to the
semiconductor industry as it made mass production of silicon integrated circuits
possible.[19][20]

The metal–oxide–semiconductor field-effect transistor (MOSFET) was then invented by
Mohamed Atalla and Dawon Kahng in 1959.[21][22] The MOSFET largely superseded
both the bipolar transistor and the JFET,[2] and had a profound effect on digital
electronic development.[23][22] With its high scalability,[24] and much lower power
consumption and higher density than bipolar junction transistors,[25] the MOSFET
made it possible to build high-density integrated circuits.[26] The MOSFET is also
capable of handling higher power than the JFET.[27] The MOSFET was the first truly
compact transistor that could be miniaturised and mass-produced for a wide range of
uses.[6] The MOSFET thus became the most common type of transistor in computers,
electronics,[20] and communications technology (such as smartphones).[28] The US
Patent and Trademark Office calls it a "groundbreaking invention that transformed life
and culture around the world".[28]

CMOS (complementary MOS), a semiconductor device fabrication process for
MOSFETs, was developed by Chih-Tang Sah and Frank Wanlass at Fairchild
Semiconductor in 1963.[29][30] The first report of a floating-gate MOSFET was made by
Dawon Kahng and Simon Sze in 1967.[31] A double-gate MOSFET was first
demonstrated in 1984 by Electrotechnical Laboratory researchers Toshihiro Sekigawa
and Yutaka Hayashi.[32][33] FinFET (fin field-effect transistor), a type of 3D non-planar
multi-gate MOSFET, originated from the research of Digh Hisamoto and his team at
Hitachi Central Research Laboratory in 1989.[34][35]
