
Algorithmic Randomness and Complexity

Rod Downey Victoria University Wellington New Zealand

Melbourne, 2011

Let's begin by examining the title:

Algorithmic Randomness and Complexity

Algorithmic
Etymology: Al-Khwarizmi, Persian astronomer and mathematician, wrote a treatise in 825 AD, On Calculation with Hindu Numerals; the word "algorithm" comes to us from the Latin translation, together with an error in translating his name. What we intuitively mean: from a set of basic instructions (ingredients), specify a mechanical method to obtain the desired result.

Already you can see that I plan to be sloppy, but you should try to get the feel of the subject.

Algorithmic

From a set of basic instructions (ingredients) specify a mechanical method to obtain the desired result.

Greatest Common Divisors


The greatest common divisor of two numbers x and y is the biggest number that is a factor of both. For instance, gcd(4, 8) = 4; gcd(6, 10) = 2; gcd(16, 13) = 1. Euclid, or perhaps Team Euclid (around 300 BC), devised what remains the best algorithm for determining the gcd of two numbers.

Euclid's Algorithm
To find gcd(1001, 357):
1001 = 357 · 2 + 287
357 = 287 · 1 + 70
287 = 70 · 4 + 7
70 = 7 · 10
7 = gcd(1001, 357).
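The division steps translate directly into a short program. A minimal sketch in Python, mirroring the worked example above:

def gcd(x, y):
    # Replace (x, y) by (y, x mod y) until the remainder is 0;
    # the last nonzero remainder is the greatest common divisor.
    while y != 0:
        x, y = y, x % y
    return x

print(gcd(1001, 357))  # prints 7, as in the calculation above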

Computable functions and Church's Thesis


The notion of a Computable Function can be made precise, and this was done in the 1930s by people like Church, Gödel, Turing and others. It became implemented through the work of Turing, von Neumann and others.

Commonly accepted is Church's Thesis: the intuitively computable functions are the same as those defined by a Turing machine (or your favourite programming language, such as JAVA, C++, etc.). Things are trickier when we talk about complexity theory (feasible is a subset of polynomial time on a Turing machine).

Randomness

How dare we speak of the laws of chance? Is not chance the antithesis of all law? Joseph Bertrand, Calcul des Probabilités, 1889

Intuitive Randomness

Intuitive Randomness
Which of the following binary sequences seem random? A 000000000000000000000000000000000000000000000000000000000000 B 001101001101001101001101001101001101001101001101001101001101 C 010001101100000101001110010111011100000001001000110100010101 D 001001101101100010001111010100111011001001100000001011010100 E 010101110110111101110010011010110111001101101000011011110111 F 011101111100110110011010010000111111001101100000011011010101 G 000001100010111000100000000101000010110101000000100000000100 H 010100110111101101110101010000010111100000010101110101010001

Intuitive Randomness
Non-randomness: increasingly complex patterns. A 000000000000000000000000000000000000000000000000000000000000 B 001101001101001101001101001101001101001101001101001101001101 C 010001101100000101001110010111011100000001001000110100010101 D 001001101101100010001111010100111011001001100000001011010100 E 010101110110111101110010011010110111001101101000011011110111 F 011101111100110110011010010000111111001101100000011011010101 G 000001100010111000100000000101000010110101000000100000000100 H 010100110111101101110101010000010111100000010101110101010001

Intuitive Randomness
Randomness: bits coming from atmospheric patterns. A 000000000000000000000000000000000000000000000000000000000000 B 001101001101001101001101001101001101001101001101001101001101 C 010001101100000101001110010111011100000001001000110100010101 D 001001101101100010001111010100111011001001100000001011010100 E 010101110110111101110010011010110111001101101000011011110111 F 011101111100110110011010010000111111001101100000011011010101 G 000001100010111000100000000101000010110101000000100000000100 H 010100110111101101110101010000010111100000010101110101010001

Intuitive Randomness
Partial Randomness: mixing random and nonrandom sequences. A 000000000000000000000000000000000000000000000000000000000000 B 001101001101001101001101001101001101001101001101001101001101 C 010001101100000101001110010111011100000001001000110100010101 D 001001101101100010001111010100111011001001100000001011010100 E 010101110110111101110010011010110111001101101000011011110111 F 011101111100110110011010010000111111001101100000011011010101 G 000001100010111000100000000101000010110101000000100000000100 H 010100110111101101110101010000010111100000010101110101010001

Intuitive Randomness
Randomness relative to other measures: biased coins. A 000000000000000000000000000000000000000000000000000000000000 B 001101001101001101001101001101001101001101001101001101001101 C 010001101100000101001110010111011100000001001000110100010101 D 001001101101100010001111010100111011001001100000001011010100 E 010101110110111101110010011010110111001101101000011011110111 F 011101111100110110011010010000111111001101100000011011010101 G 000001100010111000100000000101000010110101000000100000000100 H 010100110111101101110101010000010111100000010101110101010001

Three Approaches to Randomness at an Intuitive Level


The statistician's approach: Deal directly with rare patterns using measure theory. Random sequences should not have effectively rare properties. (von Mises, 1919; finally Martin-Löf, 1966.) Computably generated null sets represent effective statistical tests.

The coder's approach: Rare patterns can be used to compress information. Random sequences should not be compressible (i.e., easily describable). (Kolmogorov, Levin, Chaitin, 1960s-1970s.) Kolmogorov complexity: the complexity of $\sigma$ is the length of the shortest description of $\sigma$.

The gambler's approach: A betting strategy can exploit rare patterns. Random sequences should be unpredictable. (Solomonoff, 1961; Schnorr, 1975; Levin, 1970.) No effective martingale (betting strategy) can make an infinite amount betting on the bits.

The statistician's approach


von Mises, 1919: a random sequence should have as many 0s as 1s. But what about 1010101010101010.....? von Mises' idea: if you select a subsequence $\{a_{f(1)}, a_{f(2)}, \dots\}$ (e.g. $f(1) = 3$, $f(2) = 10$, $f(3) = 29{,}000$, so the 3rd, the 10th, the 29,000th bits, etc.), then the number of 0s (respectively 1s) divided by the number of elements selected should tend to 1/2. (Law of Large Numbers.) But what selection functions should be allowed? Church: computable selections. Ville, 1939, showed that no countable collection of selection functions suffices. Essentially, there are not enough statistical tests.
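A hedged illustration (the function names are mine, not von Mises'): a toy selection test in Python. Pick out a subsequence with a computable rule and check that the frequency of 1s among the selected bits is near 1/2. Real von Mises selection rules may also consult the bits already seen; this sketch only uses positions.

import random

def selection_frequency(bits, select):
    # Frequency of 1s along the subsequence picked out by `select`,
    # a computable rule mapping a position to True/False.
    chosen = [b for i, b in enumerate(bits) if select(i)]
    return sum(chosen) / len(chosen)

bits = [random.randrange(2) for _ in range(100000)]
print(selection_frequency(bits, lambda i: True))        # all bits: near 0.5
print(selection_frequency(bits, lambda i: i % 3 == 0))  # every 3rd bit: near 0.5

For the sequence 10101010... the rule "select every second bit" picks out all 1s, so the limiting frequency is 1, not 1/2, and the sequence fails the test.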

Ville's Theorem
Theorem (Ville)
Given any countable collection of selection functions, there is a real passing every member of the test, yet for every n the number of 0s in $A \upharpoonright n$ (the first n bits of the real A) is always less than or equal to the number of 1s.

Martin-Löf

Martin-Löf, 1966, suggests using shrinking effective null sets as representing effective tests. This is the basis of modern effective randomness theory. A c.e. open set is one of the form $\bigcup_i (q_i, r_i)$ where $\{q_i : i \in \mathbb{N}\}$ and $\{r_i : i \in \mathbb{N}\}$ are c.e.; in Cantor space, $U = \{[\sigma] : \sigma \in W\}$ for a c.e. set of strings $W$. A Martin-Löf test is a uniformly c.e. sequence $U_1, U_2, \dots$ of c.e. open sets s.t. $\forall i\,(\mu(U_i) \leq 2^{-i})$. (Computably shrinking to measure 0.) $\alpha$ is Martin-Löf random if for every Martin-Löf test, $\alpha \notin \bigcap_{i>0} U_i$.
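A standard first example of such a test captures the intuition that a random real should not begin with a long block of 0s: take $U_n = [0^n] = \{\alpha : \alpha \text{ begins with } n \text{ zeros}\}$. Then $\mu(U_n) = 2^{-n}$, and the only real in $\bigcap_{n>0} U_n$ is the all-zero sequence (sequence A from the earlier slides), which is therefore not Martin-Löf random.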

Universal Tests
Enumerate all c.e. tests, $\{W_{e,j,s} : e, j, s \in \mathbb{N}\}$, stopping an enumeration should it threaten to exceed its bound. Let $U_n = \bigcup_{e \in \mathbb{N}} W_{e,n+e+1}$. $A$ passes this test iff it passes all tests: it is a universal Martin-Löf test. (Martin-Löf.)

The Coder's Approach


Have a Turing machine $U$; if $U(\tau) = \sigma$, then $\tau$ is a $U$-description of $\sigma$. The length of the shortest such $\tau$ is the Kolmogorov complexity of $\sigma$ relative to $U$, written $C_U(\sigma)$. There are universal machines, in the sense that for all machines $M$, $C_U(\sigma) \leq C_M(\sigma) + d_M$.
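$C_U$ is uncomputable, but any real compressor gives a computable upper bound on description length. A rough Python illustration using zlib (this measures compressibility in the spirit of $C$, not $C$ itself):

import os
import zlib

def compressed_length(s: bytes) -> int:
    # Length of zlib's description of s: a computable upper bound,
    # in spirit, on the length of the shortest description of s.
    return len(zlib.compress(s, 9))

patterned = b"01" * 500   # highly regular: compresses to very little
noise = os.urandom(1000)  # incompressible with high probability
print(compressed_length(patterned))  # small
print(compressed_length(noise))      # about 1000, plus overhead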

reals
From this point of view, all the initial segments of a random real should be random. First try: a real $\alpha$ is random iff for all $n$, $C(\alpha \upharpoonright n) \geq n - d$. By complexity oscillations no such real can exist. The reason is that $C$ lacks the intensional meaning of Kolmogorov complexity, namely that the bits of $\tau$ should encode just the information in the bits of $\sigma$. Because $C$ really uses $\tau$ together with $|\tau|$, as we know where it halts.

Prefix-free complexity


$K$ is the same except we use prefix-free machines (think telephone numbers), i.e. $U(\tau)$ halts implies $U(\tau')$ does not halt for any $\tau'$ comparable with (but not equal to) $\tau$. (Levin, later Schnorr and Chaitin.) Now define: $\alpha$ is $K$-random if there is a $c$ s.t. $\forall n\,(K(\alpha \upharpoonright n) > n - c)$.
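The telephone-number analogy made concrete: in a prefix-free set of codewords, no codeword is a proper prefix of another, so a machine reading left to right knows exactly where a description ends. A minimal Python sketch of one self-delimiting encoding (length in unary, then the string); real prefix-free machines use far more efficient encodings:

def encode(s: str) -> str:
    # |s| in unary (a run of 1s ended by a 0), then s itself.
    # No codeword is a proper prefix of another codeword.
    return "1" * len(s) + "0" + s

def decode(t: str) -> str:
    n = t.index("0")       # the unary block gives the length n
    return t[n + 1 : n + 1 + n]

print(encode("0110"))          # 111100110
print(decode(encode("0110")))  # 0110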

And...
They all give the same class of randoms!

Theorem (Schnorr)
A is Martin-Löf random iff A is $K$-random.

Similar ideas use martingales, where you bet on the next bit. $A$ is random iff no effective martingale succeeds in achieving infinite winnings betting on the bits of $A$. The fairness condition is $f(\sigma) = \frac{f(\sigma 0) + f(\sigma 1)}{2}$.
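A hedged Python sketch (the names and the stake fraction are my choices): a martingale that bets half its capital that the next bit continues the period-6 pattern of sequence B above. A correct guess multiplies capital by 3/2 and a wrong one by 1/2, so the two outcomes average to the current capital, which is exactly the fairness condition.

def run_martingale(bits, predict, stake=0.5):
    # Start with capital 1; bet `stake` of current capital on the
    # predicted next bit. Fairness: win and loss outcomes average
    # to the current capital.
    capital = 1.0
    for i, b in enumerate(bits):
        guess = predict(bits[:i])
        capital *= (1 + stake) if guess == b else (1 - stake)
    return capital

pattern = [0, 0, 1, 1, 0, 1]   # sequence B repeats 001101
B = pattern * 10               # the first 60 bits of B
print(run_martingale(B, lambda seen: pattern[len(seen) % 6]))  # 1.5**60

Against B this strategy's winnings grow without bound; against independent fair coin flips no effective strategy can do so.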

Many variations, depending on the sensitivity of the tests. Implementations approximate the truth: ZIP, GZIP, RAR and other text compression programmes. Notice: no claims here about randomness in nature. But there are very interesting questions as to, e.g., how much randomness is needed for physics etc. Interesting experiments can be done, e.g. on ants (or children) (Reznikova and Ryabko, 1986).

and Complexity
How hard is it to compute the solution?

How many steps does the algorithm take?

Two examples.

CDs
Algebraic coding (Hamming, 1950): take something to be coded and produce something longer, with redundancies enabling decoding. E.g. a parity check on 101001010100101 can decide if there is likely a single error; ISBNs etc. More complex decoding uses algebra (specifically, algebra following a long line of work towards Fermat's Last Theorem (Kummer, 1840s) and the non-solvability of the quintic (Galois, 1830)). Like the Yellow Pages: instead of letting your fingers do the walking, algebra talks the talk. (Mixed metaphor.) A CD first takes blocks 00000000 ... 11111111 and amplifies them to codewords of length 256, so there are $2^{256}$ many possible codewords; these are decoded in real time.
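A minimal sketch of the parity-check idea in Python: append one bit making the total number of 1s even. A single flipped bit is then detectable, though not correctable, and two errors cancel out.

def add_parity(bits):
    # Append a check bit so the total number of 1s is even.
    return bits + [sum(bits) % 2]

def parity_ok(word):
    # An odd count of 1s means some single bit was corrupted.
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 0, 0, 1])
print(parity_ok(word))   # True: no error
word[2] ^= 1             # corrupt one bit in transit
print(parity_ok(word))   # False: error detected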

Beer Delivery
Take a big map and plan a tour to cost the least amount. (Beer delivery, or the Travelling Salesman Problem.) Yet beer delivery (TSP) for 256 cities is computationally impossible: no way is known except to try all possibilities! (P =? NP) (Cook, 1971; Karp, 1972; Levin, who knows?) This is called computational intractability. Sometimes intractability is good, e.g. for RSA and credit cards: if factorization were easy, modern banking would break down!
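To see why 256 cities is hopeless, here is the try-all-possibilities method as a Python sketch. Fixing the starting city, there are (n-1)! tours to check: about 5,000 for 8 cities, and more than 10^500 for 256.

import random
from itertools import permutations

def shortest_tour(dist):
    # Brute force: check every ordering of cities 1..n-1, starting
    # and ending at city 0. The loop runs (n-1)! times.
    n = len(dist)
    best = float("inf")
    for rest in permutations(range(1, n)):
        tour = (0,) + rest + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

n = 8
dist = [[0 if i == j else random.randint(1, 9) for j in range(n)]
        for i in range(n)]
print(shortest_tour(dist))  # feasible for n = 8; hopeless for n = 256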

Some applications
Using chaos and randomness enables us to treat dynamical systems like the weather: replace statistical tools by computational ones. Speeding up algorithms, e.g. supplying primes for things like RSA (of course, whether BPP = P is open). Phylogeny and the evolution of languages etc. (something of a dream). Understanding how levels of randomness relate to performance, etc. Differential geometry, reverse mathematics, Brownian motion, sampling randoms, etc. (And misuses, such as by creationists!)

My work
What is random? What level of randomness is necessary for applications? Suppose I have a source of weak randomness; how can I amplify this to get better randomness? How can we calibrate levels of randomness? Among randoms? Among non-randoms? How does this relate to classical computability notions, which calibrate levels of computational complexity? If a real is random, does it have strong or weak computational power?

Randoms should be computationally weak


We now know that there are two kinds of randoms: those which resemble Chaitin's $\Omega = \sum_{\sigma} 2^{-K(\sigma)}$, and more typical ones. (Specifically, by a theorem of Stephan from 2002.)

There has been a lot of popular press about $\Omega$, "the number of knowledge" etc., which is random but has high computational power.

We would theorize that randoms should be stupid: computationally weak.

One example, from music


Stupidity Tests. There are two ways to convince someone you are stupid: be so smart that you know how to act stupid, or really be stupid. Randoms pass stupidity tests the first way. That is, with sufficient randomness, randomness begins to resemble order. This is kind of remarkable; we are still trying to understand it. One of the following music examples is aleatoric (or chance) music and the other is totally serial (based on a pattern). Which is which?

How Chaos Resembles Order


Highly random objects can resemble highly patterned ones.

A musical example.
Excerpt A: from Music of Changes by John Cage.
Excerpt B: from Structures for Two Pianos by Pierre Boulez.

Cage's piece is an example of aleatory music. Boulez's piece is an example of total serialism.

Theorem (Stephan)
A random real A can compute a {0,1}-valued DNC function (we say the real has PA degree) iff A computes the halting problem. Here $f$ is DNC iff for all $x$, $f(x) \neq \varphi_x(x)$, with $f(x) \in \{0, 1\}$. If we remove the 0,1 restriction then $f$ is called fixed point free, and any random can compute one.

Theorem (Barmpalias, Lewis, Ng)


Every PA degree is the join of two random degrees.

Halting probabilities
One would think therefore that $\Omega$ has nothing to do with most randoms, but:

Theorem (Downey, Hirschfeldt, Miller, Nies)


Almost every random $A$ is $\Omega^B$ for some $B$.

Theorem (Kurtz)
Almost every random $A$ is computably enumerable relative to some $B <_T A$.

K-triviality
Theorem (Chaitin)
If $C(A \upharpoonright n) \leq C(n) + O(1)$ for all $n$, then $A$ is computable. This is proven using the fact that a $\Pi^0_1$ class with a finite number of paths has only computable paths, combined with the Counting Theorem: $|\{\sigma : C(\sigma) \leq C(n) + d \wedge |\sigma| = n\}| = O(2^d)$. (The Loveland technique.)

K-triviality
Theorem (Chaitin)
If $C(A \upharpoonright n) \leq C(n) + O(1)$ for all $n$, then $A$ is computable. This is proven using the fact that a $\Pi^0_1$ class with a finite number of paths has only computable paths, combined with the Counting Theorem: $|\{\sigma : C(\sigma) \leq C(n) + d \wedge |\sigma| = n\}| = O(2^d)$. (The Loveland technique.) What if $K(A \upharpoonright n) \leq K(n) + O(1)$ for all $n$? We call such reals $K$-trivial. Does $A$ $K$-trivial imply $A$ computable?

K-triviality
Theorem (Solovay)
There are noncomputable $K$-trivial reals.
$A = \{\langle e, n\rangle : \exists s\,(W_{e,s} \cap A_s = \emptyset \wedge \langle e, n\rangle \in W_{e,s} \wedge \sum_{\langle e, n\rangle \leq j \leq s} 2^{-K(j)[s]} < 2^{-(e+2)})\}$.

Theorem (Downey, Hirschfeldt, Nies, Stephan)


$A$ is $K$-trivial and noncomputable implies $\emptyset <_T A <_T \emptyset'$, and hence the $K$-trivials solve Post's problem.

Want to know more?


My homepage: just type Rod Downey into Google, and I am the one who is not the author of gay Lolita. Buy that wonderful book, $57 from Amazon at present. Buy some for your friends.

Thank You
