
Occam's Quantum Razor: How Quantum Mechanics can reduce the complexity of classical models

Mile Gu,¹ Karoline Wiesner,² Elisabeth Rieper,¹ and Vlatko Vedral³,⁴

¹Center for Quantum Technology, National University of Singapore, Republic of Singapore
²School of Mathematics, Centre for Complexity Sciences, University of Bristol, Bristol BS8 1TW, United Kingdom
³Atomic and Laser Physics, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, United Kingdom
⁴Department of Physics, National University of Singapore, Republic of Singapore
(Dated: April 3, 2012)

arXiv:1102.1994v5 [quant-ph] 2 Apr 2012

Mathematical models are an essential component of quantitative science. They generate predictions about the future, based on information available in the present. In the spirit of Occam's razor, simpler is better; should two models make identical predictions, the one that requires less input is preferred. Yet, for almost all stochastic processes, even the provably optimal classical models waste information. The amount of input information they demand exceeds the amount of predictive information they output. We systematically construct quantum models that break this classical bound, and show that the system of minimal entropy that simulates such processes must necessarily feature quantum dynamics. This indicates that many observed phenomena could be significantly simpler than classically possible should quantum effects be involved.
PACS numbers: 02.50.-r, 89.70.-a, 03.67.-a, 02.50.Ey, 03.67.Ac

INTRODUCTION

Occam's razor, the principle that "plurality is not to be posited without necessity," is an important heuristic that guides the development of theoretical models in quantitative science. In the words of Isaac Newton, "We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances." Take, for example, the application of Newton's laws to an apple in free fall. The future trajectory of the apple is entirely determined by a second-order differential equation that requires only its current location and velocity as input. We can certainly construct alternative models that predict identical behavior but demand the apple's color, or its entire past trajectory, as input. Such theories, however, are dismissed by Occam's razor, since they demand input information that is either unnecessary or redundant.

Generally, a mathematical model of a system of interest is an algorithmic abstraction of its observable output. Envision that the given system is encased within a black box, such that we observe only its output. Within a second box resides a computer that executes a model of this system with appropriate input. For the model to be accurate, we expect these boxes to be operationally indistinguishable; their output is statistically equivalent, such that no external observer can differentiate which box contains the original system.

There are numerous distinct models for any given system. Consider a system of interest consisting of two binary switches. At each time-step, the system emits a 0 or 1 depending on whether the states of the two switches coincide, and one of the two switches is chosen at random and flipped. The obvious model that simulates this system keeps track of both switches, and thus requires an input of entropy 2. Yet the output is simply a sequence of alternating 0s and 1s, and can thus be modeled knowing only the value of the previous emission. Occam's razor stipulates that this alternative is more efficient and thus superior; it demands only an input of entropy 1 (i.e., a single bit), where the original model required two. This motivates a direct interpretation of Occam's razor: the optimal model of a particular behavior is the one whose input is of minimal entropy. Indeed, this interpretation has already been adopted as a principle of computational mechanics [1, 2].

Efficient mathematical models carry operational consequence. The practical application of a model necessitates its physical realization within a corresponding simulator (Fig. 1). Therefore, should a model demand an input of entropy $C$, its physical realization must contain the capacity to store that information. The construction of simpler mathematical models for a given process allows potential construction of simulators with reduced information-storage requirements. Thus we can directly infer the minimal complexity of an observed process once we know its simplest model. If a process exhibits observed statistics that require an input of entropy $C$ to model, then, whatever the underlying mechanics of the observed process, we require a system of entropy $C$ to simulate its future statistics.

FIG. 1: The relationship between models and simulators. A mathematical model is defined by a stochastic function $f$ that maps relevant data from the present, $x$, to output statistics that coincide with those of the process it seeks to model. To implement this model, we must realize it within some physical simulator. To do this, we (a) encode $x$ within a suitable physical system, (b) evolve the system according to a physical implementation of $f$, and (c) retrieve the predictions of the model by appropriate measurement. On the other hand, given a simulator with entropy $C$ that outputs statistically identical predictions, we can always construct a corresponding mathematical model that takes the initial state of this system as input. Thus the input entropy of a model and the initial entropy of its corresponding simulator coincide (this is also a lower bound on the amount of information the simulator must store). In this article, we regard both models and simulators as algorithms that map input states to desired output statistics, with the implicit understanding that the two terms are interchangeable: the former emphasizes the mathematical nature of these algorithms, the latter their physical realization.

These observations motivate maximally efficient models: models that generate desired statistical behavior while requiring minimal input information. In this article, we show that even when such behavior arises from simple stochastic processes, such models are almost always quantum. For any given stochastic process, we outline its provably simplest classical model. We show that, unless improvement over this optimal classical model violates the second law of thermodynamics, a superior quantum model and its corresponding simulator can always be constructed.
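The two-switch example above can be checked directly. A minimal sketch (function and variable names are ours, not the paper's):

```python
import random

def two_switch_process(steps, seed=0):
    """Emit 1 if the two switches coincide (0 otherwise),
    then flip one switch chosen at random."""
    rng = random.Random(seed)
    a, b = 0, 0
    outputs = []
    for _ in range(steps):
        outputs.append(1 if a == b else 0)
        if rng.random() < 0.5:
            a ^= 1   # flip the first switch
        else:
            b ^= 1   # flip the second switch
    return outputs

seq = two_switch_process(20)
print(seq)  # strictly alternating: each emission is 1 minus the previous one
```

Because flipping either switch always toggles whether the two coincide, the output strictly alternates regardless of the random choice, so a model that stores only the previous emission (entropy 1) reproduces the statistics exactly.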

RESULTS

Framework and tools. We can characterize the observable behavior of any dynamical process by a joint probability distribution $P(\overleftarrow{X}, \overrightarrow{X})$, where $\overleftarrow{X}$ and $\overrightarrow{X}$ are the random variables that govern the system's observed behavior in the past and the future, respectively. Each particular realization of the process has a particular past $\overleftarrow{x}$, with probability $P(\overleftarrow{X} = \overleftarrow{x})$. Should there exist a model for this behavior with an input of entropy $C$, then we may compress $\overleftarrow{x}$ within a system $S$ of entropy $C$, such that systematic actions on $S$ generate random variables whose statistics obey $P(\overrightarrow{X} \mid \overleftarrow{X} = \overleftarrow{x})$. We seek the maximally efficient model, such that $C$ is minimized. Since the past contains exactly $E = I(\overleftarrow{X} ; \overrightarrow{X})$ (the mutual information between past and future) about the future, the model must require an input of entropy at least $E$ (this remains true for quantum systems [3]). On the other hand, there appears no obvious reason a model should require anything more. We say that the resulting model, where $C = E = I(\overleftarrow{X} ; \overrightarrow{X})$, is ideal. It turns out that for many systems such models do not exist.

Consider a dynamical system observed at discrete times $t \in \mathbb{Z}$, with possible discrete outcomes $x_t$ dictated by random variables $X_t$. Such a system can be modeled by a stochastic process [4], where each realization is specified by a sequence of past outcomes $\overleftarrow{x} = \ldots x_{-3} x_{-2} x_{-1}$ and exhibits a particular future $\overrightarrow{x} = x_0 x_1 x_2 \ldots$ with probability $P(\overrightarrow{X} = \overrightarrow{x} \mid \overleftarrow{X} = \overleftarrow{x})$. Here $E = I(\overleftarrow{X} ; \overrightarrow{X})$, referred to as the excess entropy [5, 6], is a quantity of relevance in diverse disciplines, ranging from spin systems [7] to measures of brain complexity [8]. How can we construct the simplest simulator of such behavior, preferably with input entropy of no more than $E$? The brute-force approach is to create an algorithm that samples from $P(\overrightarrow{X} \mid \overleftarrow{X} = \overleftarrow{x})$ given complete knowledge of $\overleftarrow{x}$. Such a construction accepts $\overleftarrow{x}$ directly as input, resulting in a required entropy of $C = H(\overleftarrow{X})$, where $H(\overleftarrow{X})$ denotes the Shannon entropy of the complete past. This is wasteful. Consider the output statistics resulting from a sequence of coin flips, such that $P(\overleftarrow{X}, \overrightarrow{X})$ is the uniform distribution over all binary strings. $E$ equals 0, and yet $C$ is infinite. It should not require infinite memory to mimic a single coin; better approaches exist.

Simplest classical models. $\varepsilon$-machines are the provably optimal classical solution [9, 10]. They rest on the rationale that, to exhibit desired future statistics, a system need not distinguish differing pasts $\overleftarrow{x}$ and $\overleftarrow{x}'$ if their future statistics coincide. This motivates the equivalence relation $\sim$ on the set of all past output histories, such that $\overleftarrow{x} \sim \overleftarrow{x}'$ iff $P(\overrightarrow{X} \mid \overleftarrow{x}) = P(\overrightarrow{X} \mid \overleftarrow{x}')$. To sample from $P(\overrightarrow{X} \mid \overleftarrow{x})$ for a particular $\overleftarrow{x}$, an $\varepsilon$-machine need not store $\overleftarrow{x}$, only which equivalence class $\varepsilon(\overleftarrow{x}) \equiv \{\overleftarrow{x}' : \overleftarrow{x}' \sim \overleftarrow{x}\}$ it belongs to. Each equivalence class is referred to as a causal state. For any stochastic process $P(\overleftarrow{X}, \overrightarrow{X})$ with emission alphabet $\Sigma$, we may deduce its causal states $\{S_i\}_{i=1}^{N}$, which form the state space of its corresponding $\varepsilon$-machine. At each time step $t$, the machine operates according to a set of transition probabilities $T^{(r)}_{j,k}$: the probability that the machine will output $x_t = r \in \Sigma$ and transition to $S_k$, given that it is in state $S_j$. The resulting $\varepsilon$-machine, when initially set to state $\varepsilon(\overleftarrow{x})$, generates a sequence $\overrightarrow{x}$ according to the probability distribution $P(\overrightarrow{X} \mid \overleftarrow{X} = \overleftarrow{x})$ as it iterates through these transitions. The resulting $\varepsilon$-machine thus has internal entropy

$$C = H(S) = -\sum_{j} p_j \log p_j \equiv C_\mu, \qquad (1)$$

where $S$ is the random variable that governs $S_j = \varepsilon(\overleftarrow{x})$, and $p_j$ is the probability that $\varepsilon(\overleftarrow{x}) = S_j$.

The provable optimality of $\varepsilon$-machines among all classical models motivates $C_\mu$ as an intrinsic property of a given stochastic process, rather than just a property of $\varepsilon$-machines. Referred to in the literature as the statistical complexity [10, 11], its interpretation as the minimal amount of information storage required to simulate a given process has been applied to quantify self-organization [12], the onset of chaos [9], and the complexity of protein configuration space [13]. Such interpretations, however, implicitly assume that classical models are optimal. Should a quantum simulator be capable of exhibiting the same output statistics with reduced entropy, this fundamental interpretation of $C_\mu$ may require review.

Classical models are not ideal. There is certainly room for improvement. For many stochastic processes, $C_\mu$ is strictly greater than $E$ [11]; the $\varepsilon$-machine that models such processes is fundamentally irreversible. Even if the entire future output of such an $\varepsilon$-machine were observed, we would still remain uncertain which causal state the machine was initialized in. Some of that information has been erased, and thus, in principle, need never have been stored. In this paper we show that for all such processes, quantum processing helps: for any $\varepsilon$-machine with $C_\mu > E$, there exists a quantum system, a quantum $\varepsilon$-machine, with entropy $C_q$ such that $C_\mu > C_q \geq E$. The corresponding model therefore demands an input with entropy no greater than $C_q$.

The key intuition for our construction lies in identifying the cause of irreversibility within classical $\varepsilon$-machines, and addressing it within quantum dynamics. An $\varepsilon$-machine distinguishes two different causal states provided they have differing future statistics, but makes no distinction based on how much these futures differ. Consider two causal states, $S_j$ and $S_k$, that both have the potential to emit output $r$ at the next time-step and transition to some coinciding causal state $S_l$. Should this occur, some of the information required to completely distinguish $S_j$ and $S_k$ has been irreversibly lost. We say that $S_j$ and $S_k$ share non-distinct futures. In fact, this is both a necessary and a sufficient condition for $C_\mu > E$ (see Methods for proof).

The irreversibility condition. Given a stochastic process $P(\overleftarrow{X}, \overrightarrow{X})$ with excess entropy $E$ and statistical complexity $C_\mu$, let its corresponding $\varepsilon$-machine have transition probabilities $T^{(r)}_{j,k}$. Then $C_\mu > E$ iff there exists a non-zero probability that two different causal states, $S_j$ and $S_k$, will both make a transition to a coinciding causal state $S_l$ upon emission of a coinciding output $r \in \Sigma$, i.e., $T^{(r)}_{j,l}, T^{(r)}_{k,l} \neq 0$. We refer to this as the irreversibility condition.

This condition highlights the fundamental limitation of any classical model. In order to generate desired statistics, any classical model must record each binary property $A$ such that $P(\overrightarrow{X} \mid A = 0) \neq P(\overrightarrow{X} \mid A = 1)$, regardless of how much these distributions overlap. In contrast, quantum models are free of such restriction. A quantum system can store causal states as quantum states that are not mutually orthogonal. The resulting quantum $\varepsilon$-machine differentiates causal states sufficiently to generate correct statistical behavior. Essentially, it saves memory by partially discarding $A$, and yet retains enough information to recover the statistical differences between $P(\overrightarrow{X} \mid A = 0)$ and $P(\overrightarrow{X} \mid A = 1)$.
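Given a tabulation of the transition probabilities, the irreversibility condition is mechanical to check. A sketch, using our own array convention `T[r, j, k]` = $T^{(r)}_{j,k}$ and the perturbed-coin process analyzed later as a test case:

```python
import numpy as np

def is_irreversible(T, tol=1e-12):
    """True iff two distinct causal states can both transition to the
    same state S_l while emitting the same symbol r with probability > 0,
    i.e. the condition under which C_mu > E."""
    n_symbols, n_states, _ = T.shape
    for r in range(n_symbols):
        for l in range(n_states):
            # causal states that can reach S_l while emitting r
            if np.count_nonzero(T[r, :, l] > tol) >= 2:
                return True
    return False

# Perturbed coin (flip probability p): on output 0, both S0 and S1
# can land in S0, so the condition holds and C_mu > E.
p = 0.3
T = np.zeros((2, 2, 2))
T[0, 0, 0] = 1 - p   # in S0: emit 0, stay in S0
T[0, 1, 0] = p       # in S1: emit 0, move to S0
T[1, 0, 1] = p       # in S0: emit 1, move to S1
T[1, 1, 1] = 1 - p   # in S1: emit 1, stay in S1
print(is_irreversible(T))      # True

# A noiseless alternating process (...0101...) is reversible.
T_alt = np.zeros((2, 2, 2))
T_alt[1, 0, 1] = 1.0  # in S0: emit 1, move to S1
T_alt[0, 1, 0] = 1.0  # in S1: emit 0, move to S0
print(is_irreversible(T_alt))  # False
```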
Improved quantum models. Given an $\varepsilon$-machine with causal states $S_j$ and transition probabilities $T^{(r)}_{j,k}$, we define quantum causal states

$$|S_j\rangle = \sum_{k=1}^{N} \sum_{r \in \Sigma} \sqrt{T^{(r)}_{jk}}\; |r\rangle |k\rangle, \qquad (2)$$

where $\{|r\rangle\}$ and $\{|k\rangle\}$ form orthogonal bases on Hilbert spaces of dimension $|\Sigma|$ and $|S|$ respectively. A quantum $\varepsilon$-machine accepts a quantum state $|S_j\rangle$ as input in place of $S_j$. Thus, such a system has an internal entropy of

$$C_q = -\mathrm{Tr}\, \rho \log \rho, \qquad (3)$$

where $\rho = \sum_j p_j |S_j\rangle\langle S_j|$. $C_q$ is clearly strictly less than $C_\mu$ provided not all $|S_j\rangle$ are mutually orthogonal [14]. This is guaranteed whenever $C_\mu > E$: the irreversibility condition implies that there exist two causal states, $S_j$ and $S_k$, which will both make a transition to a coinciding causal state $S_l$ upon emission of a coinciding output $r$, i.e., $T^{(r)}_{j,l}, T^{(r)}_{k,l} \neq 0$. Consequently $\langle S_j | S_k \rangle \geq \sqrt{T^{(r)}_{j,l} T^{(r)}_{k,l}} > 0$, and thus $|S_j\rangle$ and $|S_k\rangle$ are not orthogonal.

A quantum $\varepsilon$-machine initialized in state $|S_j\rangle$ can synthesize black-box behavior that is statistically identical to that of a classical $\varepsilon$-machine initialized in state $S_j$. A simple method is to (i) measure $|S_j\rangle$ in the basis $\{|r\rangle|k\rangle\}$, resulting in measurement values $r, k$; and (ii) set $r$ as output $x_0$ and prepare the quantum state $|S_k\rangle$. Repetition of this process generates a sequence of outputs $x_1, x_2, \ldots$ according to the same probability distribution as the original $\varepsilon$-machine, and hence according to $P(\overrightarrow{X} \mid \overleftarrow{x})$. (We note that while the simplicity of the above method makes it easy to understand and amenable to experimental realization, there is room for improvement. The decoding process prepares $|S_k\rangle$ based on the value of $k$, and thus still requires $C_\mu$ bits of memory. However, there exist more sophisticated protocols without such limitation, such that the entropy of the quantum $\varepsilon$-machine remains at $C_q$ at all times. One is detailed in Methods.)

These observations lead to the central result of our paper.

Theorem: Consider any stochastic process $P(\overleftarrow{X}, \overrightarrow{X})$ with excess entropy $E$ whose optimal classical model has input entropy $C_\mu > E$. Then we may construct a quantum system that generates identical statistics with input entropy $C_q < C_\mu$. In addition, the entropy of this system never exceeds $C_q$ while generating these statistics.

There always exist quantum models of greater efficiency than the optimal classical model, unless the optimal classical model is already ideal.

A concrete example: simulating perturbed coins. We briefly highlight these ideas with the concrete example of a perturbed coin. Consider a process $P(\overleftarrow{X}, \overrightarrow{X})$ realized by a box that contains a single coin. At each time step, the box is perturbed such that the coin flips with probability $0 < p < 1$, and the state of the coin is then observed. This results in a stochastic process where each $x_t \in \{0, 1\}$, governed by random variable $X_t$, represents the result of the observation at time $t$.
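The measure-and-prepare protocol (steps i and ii) is classically simulable, since each round is a projective measurement with outcome probabilities $T^{(r)}_{j,k}$ followed by a re-preparation. A sketch for the perturbed coin just described (the array convention `T[r, j, k]` = $T^{(r)}_{j,k}$ and function names are ours):

```python
import numpy as np

def sample_outputs(T, j0, steps, rng):
    """Simulate steps (i)-(ii): measuring |S_j> in the |r>|k> basis
    yields (r, k) with probability T^{(r)}_{j,k}; we output r and
    prepare |S_k> for the next round."""
    n_symbols, n_states, _ = T.shape
    j, outputs = j0, []
    for _ in range(steps):
        probs = T[:, j, :].reshape(-1)       # joint P(r, k | S_j)
        idx = rng.choice(probs.size, p=probs)
        r, k = divmod(idx, n_states)
        outputs.append(r)
        j = k                                # next round starts from |S_k>
    return outputs

p = 0.3
T = np.zeros((2, 2, 2))
T[0, 0, 0], T[0, 1, 0] = 1 - p, p   # emit 0 and move to S0
T[1, 0, 1], T[1, 1, 1] = p, 1 - p   # emit 1 and move to S1
rng = np.random.default_rng(7)
xs = np.array(sample_outputs(T, 0, 100_000, rng))

# The outputs should look like a coin that flips with probability p.
flip_rate = np.mean(xs[:-1] != xs[1:])
print(flip_rate)  # close to 0.3
```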
For any $p \neq 0.5$, this system has two causal states, corresponding to the two possible states of the coin: the set of pasts ending in 0 and the set of pasts ending in 1. We call these $S_0$ and $S_1$. The perturbed coin is its own best classical model, requiring exactly a system of entropy $C_\mu = 1$, namely the coin itself, to generate correct future statistics. As $p \to 0.5$, the future statistics of $S_0$ and $S_1$ become increasingly similar. The stronger the perturbation, the less it matters what state the coin was in prior to perturbation. This is reflected by the observation that $E \to 0$ (in fact $E = 1 - H_s(p)$ [7], where $H_s(p) = -p \log p - (1-p) \log(1-p)$ is the Shannon entropy of a biased coin that outputs heads with probability $p$ [15]). Thus only $E/C_\mu = 1 - H_s(p)$ of the information stored is useful, which tends to 0 as $p \to 0.5$.

FIG. 2: Complexity of the perturbed-coin simulation. While the excess entropy of the perturbed coin approaches zero as $p \to 0.5$ (red line), generating such statistics classically generally requires an entropy of $C_\mu = 1$ (green line). Encoding the past within a quantum system leads to significant improvement (purple line). (Here $C_q = -\lambda_+ \log \lambda_+ - \lambda_- \log \lambda_-$, where $\lambda_\pm = 0.5 \pm \sqrt{p(1-p)}$.) Note, however, that even the quantum protocol still requires an input entropy greater than the excess entropy.

Quantum $\varepsilon$-machines offer dramatic improvement. We encode the quantum causal states $|S_0\rangle = \sqrt{1-p}\,|0\rangle + \sqrt{p}\,|1\rangle$ and $|S_1\rangle = \sqrt{p}\,|0\rangle + \sqrt{1-p}\,|1\rangle$ within a qubit, which results in entropy $C_q = -\mathrm{Tr}\,\rho \log \rho$, where $\rho = \frac{1}{2}(|S_0\rangle\langle S_0| + |S_1\rangle\langle S_1|)$. The non-orthogonality of $|S_0\rangle$ and $|S_1\rangle$ ensures that this will always be less than $C_\mu$ [16]. As $p \to 0.5$, a quantum $\varepsilon$-machine requires a negligible amount of memory to generate the same statistics compared to its classical counterpart (Fig. 2). This improvement is readily apparent when we model a lattice of $K$ independent perturbed coins, which outputs a number $x \in \mathbb{Z}_2^K$ that represents the state of the lattice after each perturbation. Any classical model must necessarily differentiate between $2^K$ equally likely causal states, and thus requires an input of entropy $K$. A quantum $\varepsilon$-machine reduces this to $K C_q$. For $p > 0.2$, $C_q < 0.5$: the initial conditions of two perturbed coins may be encoded within a system of entropy 1. For $p > 0.4$, $C_q < 0.1$: a system of coinciding entropy can simulate 10 such coins. This indicates that quantum systems can potentially simulate $N$ such coins upon receipt of $K \ll N$ qubits, provided appropriate compression (through lossless encodings [17]) of the relevant past.
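These figures are straightforward to reproduce. The sketch below evaluates $E$ and $C_q$ for the perturbed coin, both by direct diagonalization of $\rho$ and via the closed form $\lambda_\pm = 0.5 \pm \sqrt{p(1-p)}$ quoted in Fig. 2 (helper names are ours):

```python
import numpy as np

def h2(q):
    """Binary Shannon entropy in bits."""
    return float(-q * np.log2(q) - (1 - q) * np.log2(1 - q))

def excess_entropy(p):
    """E = 1 - H_s(p) for the perturbed coin."""
    return 1.0 - h2(p)

def cq_closed_form(p):
    """C_q from Fig. 2: the eigenvalues of rho are 0.5 ± sqrt(p(1-p))."""
    return h2(0.5 + np.sqrt(p * (1 - p)))

def cq_direct(p):
    """C_q = -Tr(rho log rho) built from the qubit causal states."""
    s0 = np.array([np.sqrt(1 - p), np.sqrt(p)])
    s1 = np.array([np.sqrt(p), np.sqrt(1 - p)])
    rho = 0.5 * (np.outer(s0, s0) + np.outer(s1, s1))
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]           # drop numerically zero eigenvalues
    return float(-(lam * np.log2(lam)).sum())

p = 0.25
C_mu = 1.0  # two equally likely causal states
print(excess_entropy(p), cq_direct(p), C_mu)  # E < C_q < C_mu
```

The same routine confirms the thresholds quoted in the text: $C_q < 0.5$ for $p > 0.2$ and $C_q < 0.1$ for $p > 0.4$.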

DISCUSSION

In this article, we have demonstrated that any stochastic process with no reversible classical model can be further simplified by quantum processing. Such stochastic processes are almost ubiquitous. Even the statistics of perturbed coins can be simulated by a quantum system of reduced entropy. In addition, the quantum reconstruction can be remarkably simple: quantum operations on a single qubit, for example, allow construction of a quantum $\varepsilon$-machine that simulates such perturbed coins. This allows potential for experimental validation with present-day technology.

This result has significant implications. Stochastic processes play a ubiquitous role in the modeling of dynamical systems that permeate quantitative science, from climate fluctuations to chemical reaction processes. Classically, the statistical complexity $C_\mu$ is employed as a measure of how much structure a given process exhibits, the rationale being that the optimal simulator of such a process requires at least this much memory. The fact that this memory can be reduced quantum mechanically implies the counterintuitive conclusion that quantizing such simulators can reduce their complexity beyond this classical bound, even if the process they are simulating is purely classical. Many organisms and devices operate based on the ability to predict, and thus react to, the environment around them. The possibility of exploiting quantum dynamics to make identical predictions with less memory implies that such systems need not be as complex as one originally thought.

This leads to the open question: is it always possible to find an ideal simulator? Certainly, Fig. 2 shows that our construction, while superior to any classical alternative, is still not wholly reversible. While this irreversibility may indicate that more efficient quantum models exist, it is also possible that ideal models remain forbidden within quantum theory. Both cases are interesting. The former would indicate that the notion of stochastic processes hiding information from the present [11] is merely a construct of inefficient classical probabilistic models, while the latter hints at a source of temporal asymmetry within the framework of quantum mechanics: that it is fundamentally impossible to simulate certain observable statistics reversibly.
METHODS

Proof of Theorem 1. Let the aforementioned $\varepsilon$-machine have causal states $S = \{S_i\}_{i=1}^{N}$ and emission alphabet $\Sigma$. Consider an instance of the $\varepsilon$-machine at a particular time-step $t$. Let $S_t$ and $X_t$ be the random variables that respectively govern its causal state and observed output at time $t$, such that the transition probabilities that define the $\varepsilon$-machine can be expressed as

$$T^{(r)}_{j,k} = P(S_t = S_k, X_t = r \mid S_{t-1} = S_j). \qquad (4)$$

We say an ordered pair $(S_j \in S, r \in \Sigma)$ is a valid emission configuration iff $T^{(r)}_{j,k} \neq 0$ for some $S_k \in S$; that is, it is possible for an $\varepsilon$-machine in state $S_j$ to emit $r$ and transit to some $S_k$. Denote the set of all valid emission configurations by $\mathcal{E}_\Sigma$. Similarly, we say an ordered pair $(S_k \in S, r \in \Sigma)$ is a valid reception configuration iff $T^{(r)}_{j,k} \neq 0$ for some $S_j \in S$, and denote the set of all valid reception configurations by $\mathcal{R}_\Sigma$. We define the transition function $f : \mathcal{E}_\Sigma \to \mathcal{R}_\Sigma$ such that $f(S_j, r) = (S_k, r)$ if the $\varepsilon$-machine set to state $S_j$ will transition to state $S_k$ upon emission of $r$. We also introduce the shorthand $X_a^b$ to denote the list of random variables $X_a, X_{a+1}, \ldots, X_b$. We first prove the following observations.

1. $f$ is one-to-one iff there exist no two distinct causal states $S_j$ and $S_k$ such that $T^{(r)}_{j,l}, T^{(r)}_{k,l} \neq 0$ for some $S_l$.
Proof: Suppose $f$ is one-to-one; then $f(S_j, r) = f(S_k, r)$ iff $S_j = S_k$, so there do not exist two distinct causal states $S_j$ and $S_k$ with $T^{(r)}_{j,l}, T^{(r)}_{k,l} \neq 0$ for some $S_l$. Conversely, if $f$ is not one-to-one, then $f(S_j, r) = f(S_k, r)$ for some $S_j \neq S_k$. Let $S_l$ be the state such that $f(S_j, r) = (S_l, r)$; then $T^{(r)}_{j,l}, T^{(r)}_{k,l} \neq 0$.

2. $H(S_{t-1} \mid X_t S_t) = 0$ iff $f$ is one-to-one.
Proof: Suppose $f$ is one-to-one. Then for each $(S_j, r) \in \mathcal{R}_\Sigma$ there exists a unique $(S_k, r)$ such that $f(S_k, r) = (S_j, r)$. Thus, given $S_t = S_j$ and $X_t = r$, we may uniquely deduce $S_{t-1} = S_k$, and therefore $H(S_{t-1} \mid X_t S_t) = 0$. Conversely, should $H(S_{t-1} \mid X_t S_t) = 0$, then $H(S_{t-1} X_t \mid X_t S_t) = 0$, and thus $f$ is one-to-one.

3. $H(S_{t-1} \mid X_t S_t) = 0$ implies $H(S_{t-1} \mid X_0^t) = H(S_t \mid X_0^t)$.
Proof: Note (i) that $H(S_{t-1} \mid X_0^t S_t) = H(S_t \mid X_0^t S_{t-1}) + H(X_0^t S_{t-1}) - H(X_0^t S_t)$, and (ii) that, since the output of $f$ is unique for a given $(S, r) \in \mathcal{E}_\Sigma$, $H(S_t \mid X_t S_{t-1}) = 0$. (ii) implies that $H(S_t \mid X_0^t S_{t-1}) = 0$, since uncertainty can only decrease with additional knowledge and is bounded below by 0. Substituting this into (i) gives $H(S_{t-1} \mid X_0^t S_t) = H(X_0^t S_{t-1}) - H(X_0^t S_t)$. Thus $H(S_{t-1} \mid X_t S_t) = 0$ implies $H(S_{t-1} \mid X_0^t) = H(S_t \mid X_0^t)$.

4. $H(S_{t-1} \mid X_0^t) = H(S_t \mid X_0^t)$ implies $C_\mu = E$.
Proof: The result follows from two known properties of $\varepsilon$-machines: (i) $\lim_{t \to \infty} H(S_t \mid X_0^t) = 0$, and (ii) $C_\mu - E = H(S_{-1} \mid X_0^\infty)$ [10]. Now assume that $H(S_{t-1} \mid X_0^t) = H(S_t \mid X_0^t)$; recursive substitution implies that $H(S_{-1} \mid X_0^t) = H(S_t \mid X_0^t)$. In the limit $t \to \infty$, this equality implies $C_\mu - E = 0$.

5. $C_\mu = E$ implies $H(S_{t-1} \mid X_t S_t) = 0$.
Proof: $C_\mu = E$ implies $H(S_{-1} \mid X_0^\infty) = C_\mu - E = 0$, and hence $H(S_{-1} \mid X_0^\infty S_0) = 0$. Since stationarity gives $H(S_{t-1} \mid X_t S_t) = H(S_{-1} \mid X_0 S_0)$, it suffices to show that $H(S_{-1} \mid X_0^\infty S_0) = H(S_{-1} \mid X_0 S_0)$. Now $H(S_{-1} \mid X_0^\infty S_0) = H(X_0^\infty S_{-1} S_0) - H(X_0^\infty S_0) = H(X_1^\infty \mid S_{-1} X_0 S_0) + H(X_0 S_{-1} S_0) - H(X_1^\infty \mid X_0 S_0) - H(X_0 S_0)$. But, by the Markov property of causal states, $H(X_1^\infty \mid S_{-1} X_0 S_0) = H(X_1^\infty \mid X_0 S_0)$; thus $H(S_{-1} \mid X_0^\infty S_0) = H(X_0 S_{-1} S_0) - H(X_0 S_0) = H(S_{-1} \mid X_0 S_0)$, as required.

Combining (1), (2), (3) and (4), we see that if no two distinct causal states $S_j$ and $S_k$ satisfy $T^{(r)}_{j,l}, T^{(r)}_{k,l} \neq 0$ for some $S_l$, then $C_\mu = E$. Meanwhile, (1), (2) and (5) imply the converse: if $C_\mu = E$, no such pair of causal states exists. Theorem 1 follows.

Constant-entropy prediction protocol. Recall that in the simple prediction protocol, the preparation of the next quantum causal state was based on the result of a measurement in the basis $\{|k\rangle\}$. Thus, although we can encode the initial conditions of a stochastic process within a system of entropy $C_q$, the decoding process requires an interim system of entropy $C_\mu$. While this protocol establishes that quantum models require less knowledge of the past, quantum systems implementing this specific prediction protocol still need $C_\mu$ bits of memory at some stage during their evolution. This limitation is unnecessary. In this section, we present a more sophisticated protocol whose implementation has entropy $C_q$ at all points of operation.

Consider a quantum $\varepsilon$-machine initialized in state $|S_j\rangle = \sum_{k=1}^{N} \sum_{r \in \Sigma} \sqrt{T^{(r)}_{jk}}\, |r\rangle|k\rangle$. We refer to the subsystem spanned by $\{|r\rangle\}$ as $R_1$, and the subsystem spanned by $\{|k\rangle\}$ as $K$. To generate correct predictive statistics, we

1. Apply a general quantum operation on $K$ that maps any given $|S_j\rangle$ to $|S_j'\rangle = \sum_{k=1}^{N} \sum_{r \in \Sigma} \sqrt{T^{(r)}_{jk}}\, |r\rangle|S_k\rangle \in R_1 \otimes R_2 \otimes K$, where $R_2$ is a second Hilbert space of dimension $|\Sigma|$. Note that this operation always exists, since it is defined by Kraus operators $B_k = |S_k\rangle\langle k|$ that satisfy $\sum_k B_k^\dagger B_k = 1$.

2. Output $R_1$. Measurement of $R_1$ in the $\{|r\rangle\}$ basis leads to a classical output $r$ whose statistics coincide with those of its classical counterpart, $x_1$.

3. The remaining subsystem $R_2 \otimes K$ is retained as the initial condition of the quantum $\varepsilon$-machine at the next time-step.

See Fig. 3 for a circuit representation of the protocol.

FIG. 3: Quantum circuit representation of the refined prediction protocol.

Step (1) does not increase system entropy, since entropy is conserved under addition of pure ancilla, while $\langle S_j | S_k \rangle \leq \langle S_j' | S_k' \rangle$ for all $j, k$. Tracing out $R_1$ in step (3) leaves the $\varepsilon$-machine in the state $\sum_j p_j |S_j\rangle\langle S_j|$, which has entropy $C_q$. Finally, the execution of the protocol does not require knowledge of the measurement result $r$ (in fact, the quantum $\varepsilon$-machine can thus execute correctly even if all outputs remain unmeasured, and is thus truly ignorant of which causal state it is in!). Thus, the physical application of the above protocol generates correct prediction statistics without requiring more than memory $C_q$.

Acknowledgments. M.G. would like to thank C. Weedbrook, H. Wiseman, M. Hayashi, W. Son and K. Modi for helpful discussions. M.G. and E.R. are supported by the National Research Foundation and Ministry of Education, Singapore. K.W. is funded through EPSRC grant EP/E501214/1. V.V. would like to thank the EPSRC, QIP IRC, Royal Society, Wolfson Foundation, National Research Foundation (Singapore) and Ministry of Education (Singapore) for financial support.
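The completeness claim for the Kraus operators $B_k = |S_k\rangle\langle k|$ in step 1 rests only on the normalization of the $|S_k\rangle$, and is quick to confirm numerically. A minimal check using the perturbed coin's qubit causal states (a sketch, not the full $R_1 \otimes R_2 \otimes K$ construction):

```python
import numpy as np

p = 0.3
# Normalized qubit causal states of the perturbed coin.
s0 = np.array([np.sqrt(1 - p), np.sqrt(p)])
s1 = np.array([np.sqrt(p), np.sqrt(1 - p)])

# Kraus operators B_k = |S_k><k| acting on the internal register K.
basis = np.eye(2)
kraus = [np.outer(s, basis[k]) for k, s in enumerate((s0, s1))]

# Completeness sum_k B_k^dagger B_k = 1 holds because <S_k|S_k> = 1,
# so step 1 is a valid trace-preserving quantum operation.
total = sum(B.conj().T @ B for B in kraus)
print(np.allclose(total, np.eye(2)))  # True
```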

[1] J. P. Crutchfield, Physica D: Nonlinear Phenomena 75, 11 (1994).
[2] A. Ray, Signal Processing 84, 1115 (2004).
[3] A. S. Holevo, in Proceedings of the Second Japan–USSR Symposium on Probability Theory, edited by G. Maruyama and J. V. Prokhorov (Springer-Verlag, Berlin, 1973), pp. 104–119, Lecture Notes in Mathematics, Vol. 330.
[4] J. L. Doob, Stochastic Processes (Wiley, New York, 1953).
[5] J. P. Crutchfield and D. P. Feldman, Chaos: An Interdisciplinary Journal of Nonlinear Science 13, 25 (2003).
[6] P. Grassberger, International Journal of Theoretical Physics 25, 907 (1986).
[7] J. P. Crutchfield and D. P. Feldman, Phys. Rev. E 55, R1239 (1997).
[8] G. Tononi, O. Sporns, and G. M. Edelman, Proc. Natl. Acad. Sci. USA 91, 5033 (1994).
[9] J. P. Crutchfield and K. Young, Phys. Rev. Lett. 63, 105 (1989).
[10] C. R. Shalizi and J. P. Crutchfield, Journal of Statistical Physics 104, 817 (2001), arXiv:cond-mat/9907176.
[11] J. P. Crutchfield, C. J. Ellison, and J. R. Mahoney, Phys. Rev. Lett. 103, 094101 (2009).
[12] C. R. Shalizi, K. L. Shalizi, and R. Haslinger, Phys. Rev. Lett. 93, 118701 (2004).
[13] C.-B. Li, H. Yang, and T. Komatsuzaki, Proc. Natl. Acad. Sci. USA 105, 536 (2008).
[14] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
[15] C. E. Shannon, Bell Syst. Tech. J. 30, 50 (1951).
[16] G. Benenti, G. Casati, and G. Strini, Principles of Quantum Computation and Information, Vol. II (World Scientific, Singapore, 2007).
[17] K. Bostroem and T. Felbinger, Phys. Rev. A 65, 032313 (2002).
