0% found this document useful (0 votes)
27 views397 pages

Untitled

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
27 views397 pages

Untitled

Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 397

“Four legs good, two legs better ”

A modified version of the


Animal Farm’s Constitution.
“Two logs good, p logs better ”
The original Constitution
of mathematicians.

RAN D O M WALK
IN RANDOM AND

N O N- RAND O M
ENVIRONMENTS
h E C D N D E D I T I O N
This page intentionally left blank
L
RANDEOM WALK
IN RANDOM AND
N
NONN-- RANDOMM
ENVIRONMENTS
ENVIRO
E C O N D EDITION
0

Pal Revesz
Technische Universitat Wien, Austria
Technical University of Budapest, Hungary

N E W JERSEY * LONDON *
10;World
-
SINGAPORE
Scientific
-
BEIJING * SHANGHAI HONG KONG * TAIPEI - CHENNAI
Published by
World Scientific Publishing Co. Pte.Ltd.
5 Toh Tuck Link, Singapore 596224
USA ofice; 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK ofice: 57 Shelton Street, Covent Garden, London WC2H 9HE

Library of Congress Cataloging-in-PublicationData


Random walk in random and non-random environments / Pfll RCvCsz.--2nd ed.
p. cm.
Includes bibliographical references and indexes.
ISBN 981-256-361-X (alk. paper)
1. Random walks (Mathematics). I. Title.

QA274.73 .R48 2005


5 19.2’82--dc22
2005045536

British Library Cataloguing-in-PublicationData


A catalogue record for this book is available from the British Library.

Copyright 0 2005 by World Scientific Publishing Co. Pte. Ltd.


All rights reserved. This book, or parts thereof; may not be reproduced in any form or by any means,
electronic or mechanical, including photocopying, recording or any information storage and retrieval
system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright
Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to
photocopy is not required from the publisher.

Printed in Singapore by Mainland Press


Preface to the First Edition

“I did not know that it was so dangerous to drink a beer with you. You
write a book with those you drink a beer with,” said Professor Willem Van
Zwet, referring to the preface of the book Csorgo and I wrote (1981) where
it was told that the idea of that book was born in an inn in London over
a beer. In spite of this danger Willem was brave enough t o invite me t o
Leiden in 1984 for a semester and to drink quite a few beers with me there.
In fact I gave a seminar in Leiden, and the handout of that seminar can be
considered as the very first version of this book. I am indebted to Willem
and to the Department of Leiden for a very pleasant time and a number of
useful discussions.
I wrote this book in 1987-89 in Vienna (Technical University) partly sup-
ported by Fonds zur Forderung der Wissenschaftlichen Forschung, Project
Nr. P6076. During these years I had very strong contact with the Math-
ematical Institute of Budapest. I am especially indebted t o Professors E.
Csaki and A. Foldes for long conversations which have a great influence on
the subject of this book. The reader will meet quite often with the name of
P. Erdos, but his role in this book is even greater. Especially most results
of Part I1 are fully or partly due to him, but he had a significant influence
even on those results that appeared under my name only.
Last but not least, I have t o mention the name of M. Csorgo, with whom
I wrote about 30 joint papers in the last 15 years, some of them strongly
connected with the subject of this book.

Vienna, 1989. P. Rkvksz


Technical University of Vienna
Wiedner Hauptstrasse 8-10/107
-4-1040 Vienna
Austria

V
This page intentionally left blank
Preface to the Second Edition

If you write a monograph on a new, just developing subject, then in the


next few years quite a number of brand-new papers are going t o appear
in your subject and your book is going t o be outdated. If you write a
monograph on a very well-developed subject in which nothing new happens,
then it is going t o be outdated already when it is going to appear. In 1989
when I prepared the First Edition of this book it was not clear for me
that its subject was already overdeveloped or it was a still developing area.
A year later Erd6s told me that he had been surprised to see how many
interesting, unsolved problems had appeared in the last few years about the
very classical problem of coin-tossing (random walk on the line). In fact
Erdos himself proposed and solved a number of such problems.
I was happy to see the huge number of new papers (even books) that
have appeared in the last 16 years in this subject. I tried t o collect the
most interesting ones and to fit them in this Second Edition. Many of my
friends helped me to find the most important new results and to discover
some of the mistakes in the First Edition.
My special thanks t o E. CsAki, M. Csorgo”,A. Foldes, D. Khoshnevisan,
Y . Peres, Q. M. Shao, B. T6th, Z. Shi.
Vienna, 2005.

vii
This page intentionally left blank
Contents

Preface to the First Edition V

Preface to the Second Edition vii

Introduction xv

.
I SIMPLE SYMMETRIC RANDOM WALK IN Z’
Notations and abbreviations 3

1 Introduction of Part I 9
1.1 Randomwalk . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 Dyadic expansion . . . . . . . . . . . . . . . . . . . . . . . . 10
1.3 Rademacher functions . . . . . . . . . . . . . . . . . . . . . 10
1.4 Coin tossing . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 The language of the probabilist . . . . . . . . . . . . . . . . 11

2 Distributions 13
2.1 Exact distributions . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Limit distributions . . . . . . . . . . . . . . . . . . . . . . . 19

3 Recurrence and the Zero-One Law 23


3.1 Recurrence . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 The zero-one law . . . . . . . . . . . . . . . . . . . . . . . . 25

4 F’rom the Strong Law of Large Numbers to the Law of


Iterated Logarithm 27
4.1 Borel-Cantelli lemma and Markov inequality . . . . . . . . 27
4.2 The strong law of large numbers . . . . . . . . . . . . . . . 28
4.3 Between the strong law of large numbers and
the law of iterated logarithm . . . . . . . . . . . . . . . . . 29
4.4 The LIL of Khinchine . . . . . . . . . . . . . . . . . . . . . 31

5 Lbvy Classes 33
5.1 Definitions . . . . . . . . . . . . ............... 33
5.2 EFKPLIL . . . . . . . . . . . . . ............... 34
5.3 The laws of Chung and Hirsch . . . . . . . . . . . . . . . . 39
5.4 When will S, be very large? . . . . . . . . . . . . . . . . . . 39

ix
x CONTENTS

5.5 A theorem of Csaki . . . . . . . . . . . . . . . . . . . . . . . 41

6 Wiener Process and Invariance Principle 47


6.1 Four lemmas . . . . . . . . . . . . . . . ........... 47
6.2 Joining of independent random walks . . . . . . . . . . . . . 49
6.3 Definition of the Wiener process . . . . . . . . . . . . . . . 51
6.4 Invariance Principle . . . . . . . . . . . ........... 52

7 Increments 57
7.1 Long head-runs . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.2 The increments of a Wiener process . . . . . . . . . . . . . 66
7.3 The increments of 5 ’ ~. . . . . . . . . . . . . . . . . . . . . 77

8 Strassen Type Theorems 83


8.1 The theorem of Strassen . . . . . . . . . . . . . . . . . . . . a3
8.2 Strassen theorems for increments . . . . . . . . . . . . . . . 90
8.3 The rate of convergence in Strassen’s theorems . . . . . . . 92
8.4 A theorem of Wichura . . . . . . . . . . . . . . . . . . . . . 95

9 Distribution of the Local Time 97


9.1 Exact distributions . . . . . . . . . . . . . . . . . . . . . . . 97
9.2 Limit distributions . . . . . . . . . . . . . . . . . . . . . . . 103
9.3 Definition and distribution of the local time
of a Wiener process . . . . . . . . . . . . . . . . . . . . . . . 104

10 Local Time and Invariance Principle 109


10.1 An invariance principle . . . . . . . . . . . . . . . . . . . . . 109
10.2 A theorem of LBvy . . . . . . . . . . . . . . . . . . . . . . . 111

11 Strong Theorems of the Local Time 117


11.1 Strong theorems for [(z. n) and [(n) . . . . . . . . . . . . . 117
11.2 Increments of V(Z.t ) . . . . . . . . . . . . . . . . . . . . . . 119
11.3 Increments of <(z.n) . . . . . . . . . . . . . . . . . . . . . . 123
11.4 Strassen type theorems . . . . . . . . . . . . . . . . . . . . . 124
11.5 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

12 Excursions 135
12.1 On the distribution of the zeros of a random walk . . . . . . 135
12.2 Local time and the number of long excursions
(Mesure du voisinage) . . . . . . . . . . . . . . . . . . . . . 141
12.3 Local time and the number of high excursions . . . . . . . . 146
12.4 The local time of high excursions . . . . . . . . . . . . . . . 147
12.5 How many times can a random walk reach its maximum? . 152
CONTENTS xi

13 F'requently and Rarely Visited Sites 157


13.1 Favourite sites . . . . . . . . . . . . . . . . . . . . . . . . . 157
13.2 Rarely visited sites . . . . . . . . . . . . . . . . . . . . . . . 161

14 An Embedding Theorem 163


14.1 On the Wiener sheet . . . . . . . . . . . . . . . . . . . . . . 163
14.2 The theorem . . . . . . . . . . . . . . . . . . . . . . . . . . 164
14.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

15 A Few Further Results 171


15.1 On the location of the maximum of a random walk . . . . . 171
15.2 On the location of the last zero . . . . . . . . . . . . . . . . 175
15.3 The Ornstein-Uhlenbeck process and
a theorem of Darling and Erd6s . . . . . . . . . . . . . . . . 179
15.4 A discrete version of the It6 formula . . . . . . . . . . . . . 183

16 Summary of Part I 187

I1. SIMPLE SYMMETRIC RANDOM WALK IN Z d

Notations 191

17 The Recurrence Theorem 193

18 Wiener Process and Invariance Principle 203

19 The Law of Iterated Logarithm 207

20 Local Time 211


20.1 ( ( 0 .n) in Z 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
20.2 [ ( n )in Z d . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
20.3 A few further results . . . . . . . . . . . . . . . . . . . . . . 220

21 The Range 221


21.1 The strong law of large numbers . . . . . . . . . . . . . . . 221
21.2 CLT. LIL and Invariance Principle . . . . . . . . . . . . . . 225
2 1.3 Wiener sausage . . . . . . . . . . . . . . . . . . . . . . . . . 226

22 Heavy Points and Heavy Balls 227


22.1 The number of heavy points . . . . . . . . . . . . . . . . . . 227
22.2 Heavy balls . . . . . . . . . . . . . . . . ........... 236
22.3 Heavy balls around heavy points . . . . . . . . . . . . . . . 239
22.4 Wiener process . . . . . . . . . . . . . . ........... 240
xii CONTENTS

23 Crossing and Self-crossing 241

24 Large Covered Balls 245


24.1 Completely covered discs centered in the origin of Z2 . . . . 245
24.2 Completely covered disc in iZ2 with arbitrary centre . . . . 263
24.3 Almost covered disc centred in the origin of Z2 . . . . . . . 264
24.4 Discs covered with positive density in 2’ . . . . . . . . . . . 265
24.5 Completely covered balls in Z d . . . . . . . . . . . . . . . . 272
24.6 Large empty balls . . . . . . . . . . . . . . . . . . . . . . . . 277
24.7 Summary of Chapter 24 . . . . . . . . . . . . . . . . . . . . 280

25 Long Excursions 281


25.1 Long excursions in Z2 . . . . . . . . . . . . . . . . . . . . . 281
25.2 Long excursions in high dimension . . . . . . . . . . . . . . 284

26 Speed of Escape 287

27 A Few Further Problems 293


27.1 On the Dirichlet problem . . . . . . . . . . . . . . . . . . . 293
27.2 DLA model . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
27.3 Percolation . . . . . . . . . . . . . . . . . . . . . . . . . . . 297

.
I11 RANDOM WALK IN RANDOM ENVIRONMENT

Not ations 301

28 Introduction 303

29 In the First Six Days 307

30 After the Sixth Day 311


30.1 The recurrence theorem of Solomon . . . . . . . . . . . . . 311
30.2 Guess how far the particle is going away in an RE . . . . . 313
30.3 A prediction of the Lord . . . . . . . . . . . . . . . . . . . . 314
30.4 A prediction of the physicist . . . . . . . . . . . . . . . . . . 326

31 What Can a Physicist Say About the Local Time <(O. n)? 329
31.1 Two further lemmas on the environment . . . . . . . . . . . 329
31.2 On the local time [ ( O . n ) . . . . . . . . . . . . . . . . . . . . 330
...
CONTENTS Xlll

32 On the Favourite Value of the RWIRE 337

33 A Few Further Problems 345


33.1 Two theorems of Golosov . . . . . . . . . . . . . . . . ... 345
33.2 Non-nearest-neighbour random walk . . . . . . . . . . .. . 347
33.3 RWIRE in Z d . . . . . . . . . . . . . . . . . . . . ... ... 348
33.4 Non-independent environments . . . . . . . . . . . . . ... 350
33.5 Random walk in random scenery . . . . . . . .. ... . .. 350
33.6 Random environment and random scenery . .. . . . . ... 353
33.7 Reinforced random walk . . . . . . . . . . . . . . . . . ... 353

References 357

Author Index 375

Subject Index 379


This page intentionally left blank
Introduction

The first examinee is saying: Sir, I did not have time enough to study
everything but I learned very carefully the first chapter of your handout.
Very good - says the professor - you will be a great specialist. You
know what a specialist is. A specialist knows more and more about less
and less. Finally he knows everything about nothing.
The second examinee is saying: Sir, I did not have enough time but I
read your handout without taking care of the details. Very good - answers
the professor - you will be a great polymath. You know what a polymath
is. A polymath knows less and less about more and more. Finally he knows
nothing about everything.
Recalling this old joke and realizing that the biggest part of this book
is devoted to the study of the properties of the simple symmetric random
walk (or equivalently, coin tossing) the reader might say that this is a book
for specialists written by a specialist. The most trivial plea of the author is
to say that this book does not tell everything about coin tossing and even
the author does not know everything about it. Seriously speaking I wish to
explain my reasons for writing such a book.
You know that the first probabilists (Bernoulli, Pascal, etc.) investi-
gated the properties of coin tossing sequences and other simple games only.
Later on the progress of the probability theory went into two different di-
rections:
(i) to find newer and deeper properties of the coin tossing sequence,
(ii) to generalize the results known for a coin tossing sequence to more
complicated sequences or processes.
Nowadays the second direction is much more popular than the first one.
In spite of this fact this book mostly follows direction (i).
I hope that:
(a) using the advantage of the simple situation coming from concen-
trating on coin tossing sequences, the reader becomes familiar with the
problems, results and partly the methods of proof of probability theory,
especially those of the limit theorems, without suffering too much from
technical tools and difficulties,
(b) since the random walk (especially in Z d ) is the simplest mathemati-
cal model of the Brownian motion, the reader can find a simple way to the
problems (at least to the classical problems) of statistical physics,

xv
xvi INTRODUCTION

(c) since it is nearly impossible to give a more or less complete picture of


the properties of the random walk without studying the analogous proper-
ties of the Wiener process] the reader can find a simple way to the study of
the stochastic processes and should learn that it is impossible t o go deeply
in direction (i) without going a bit in direction (ii),
(d) any reader having any degree in math can understand the book, and
reading the book can get an overall picture about random phenomena] and
the readers having some knowledge in probability can get a better overview
of the recent problems and results of this part of the probability theory,
(e) some parts of this book can be used in any introductory or advanced
probability course.
The main aim of this book is to collect and compare the results - mostly
strong theorems - which describe the properties of a simple symmetric
random walk. The proofs are not always presented. In some cases more
proofs are given, in some cases none. The proofs are omitted when they
can be obtained by routine methods and when they are too long and too
technical. In both cases the reader can find the exact reference t o the
location of the (or of a) proof.
“The earth was without form and void, and dark-
ness was upon the face of the deep.”
The First Book of Moses

I. SIMPLE SYMMETRIC
RANDOM WALK IN Z1
This page intentionally left blank
Notations and abbreviations

Not ations
General notations
1. X I ,Xa,. . . is a sequence of independent, identically distributed ran-
dom variables with

P{XI = l} = P{XI = -1} = 1/2.

2. so = o,sn= S ( n ) = x1 -f- x, + . . + xn( n = 1 , 2 , . . .).


'

{ S n } is the (simple symmetric) random walk.


3 M L = M + ( n ) = max Sk,
O<k<n

M i = M - ( n ) = - min Sk,
O<k<n

M n = M ( n ) = max lskl = max(Mz,M;),


O<k<n

M: = M: + M l , Yn = M z - S,.
4. { W ( t ) ; 2
t 0) is a Wiener process (cf. Section 6.3).

5. m+(t)= sup W ( s ) ,
o<s<t

m-(t) = - inf W ( s ) ,
ossit

m(t)= sup JW(s)l= max(m+(t),m-(t)) (t 2 o),


ojs<t

m*(t)= m+(t)+ m-(t),


y ( t ) = m+(t)- W ( t ) .

6. b, = b ( n ) = (2nloglogn)-1/2,
Tn = r ( n , a ) = (2a (log; n + loglogn)) -ll2.
7. [x]is the largest integer less than or equal to x.

8.

3
4 I . SIMPLE S Y M M E T R I C R A N D O M WALK I N Z’

9.

10.

-
11. Sometimes we use the notation f(n) g ( n ) without any exact math-
emat,ical meaning, just saying that f(n) and g ( n ) are close to each
other in some sense.

12. a(.)
1
=- 1 ”
-03
e-U2/2duis the standard normal distribution func-
tion.

13. N E N(rn,a) +j P{o-l(N - rn) < x} = @(x).


14. #{. . .) = I{. . .}I is the cardinality of the set in the bracket.

15. Rd resp. Z d is the d-dimensional Euclidean space resp. its integer


grid.

16. B = Bd is the set of Borel-measurable sets of Rd.


17. A(.) is the Lebesgue measure on EXd.

18. log, ( p = 1 , 2 , . . .) is p t h iterated of log and lg resp. lg, is the


logarithm resp. p t h iterated of the logarithm of base 2.

19. Let {U,} and {Vn}be two sequences of random variables.


2,
{U,, n = 1 , 2 , . . .} = {V, n = 1 , 2 , . . .) if the finite dimensional dis-
tributions of {U,} are equal to the corresponding finite dimensional
distributions of { Vn}.

Notations to the increments


NOTATIONS A N D A B B R E V I A T I O N S 5

6 . J l ( t ,U ) = SUP ( W ( S+ U ) - W ( S ) ) ,
Ossst-a

7. J2(t,u) = sup IW(s


O<s<t-a
+ a) - W ( s ) l ,
8. J 3 ( t , ~=
) SUP SUP ( W ( S+ U ) - W ( S ) ) ,
O<s<t-a Osu<a

9. J 4 ( t , u ) = sup sup IW(s + u ) - W ( s ) l ,


O<s<t-a O<u<a

11. 2, is the largest integer for which I1(n,Zn)= Z,,i.e. 2, is the


length of the longest run of pure heads in n Bernoulli trials.

Notations to the Strassen-type theorems

2. Wt(X) =btW(tz) (0 5 II: 5 1, t > O),


3 . C(0,l) is the set of continuous functions defined on the interval [0,1],

4. S ( 0 , l ) is the Strassen’s class, containing those functions f(.) E C(0,l)


for which f(0) = 0 and J;(~‘(x))’~x5 1.

Notations to the local time


1. c$(z,n)= #{k : 0 < k 5 72, sk = X} (X = 0, fl,f 2 , . . . , n = 1 , 2 , . . .)
is the local time of the random walk {sk}.
For any A C Z1 we define
the occupation time =(A,n ) = CzEA <(x,n).
2. V(X, t ) (-co < x < +co, t 2 0) is the local time of W ( . )(cf. Section
9.3).

3. H ( A , t) = X{s : 0 5 s 5 t , W ( s )E A } (A c R1 is a Bore1 set, t 2 0)


is the occupation time of W ( . )(cf. Section 9.3).

4. Consider those values of k for which Sk = 0. Let these values in


increasing order be 0 = po < pl < p2 < . . . , i.e. p1 = min{k :
k > 0, 5’1,= 0}, p2 = min{k : k > p1, Sk = 0} , . . . , pn = min{k :
k > pn-1, sk =o}, . ...
6 I . SIMPLE S Y M M E T R I C R A N D O M W A L K IN Z'

5. For any z = 0, *l, k 2 , . . . consider those values of k for which S k = z.


Let these values in increasing order be 0 < PI(%) < p2(z) < . . . i.e.
p l ( z ) = min{k : k > 0,Sk = z ) , p 2 ( z ) = min{k : k > p I ( z ) , S k =
x}, . . . ,p,(z) = min{k : k > p,-I(z), Sk = z} . . . Clearly pi(0) = pi.
In case of a Wiener process define p: = inf{t : t 2 O,q(O,t)2 u).
6. [ ( n )= max, [(z, n ) .

7. dt)= SUP, q(z1 t>.

8. The random sequences

S17 . . Sp, }I
El = {SO, E2 = { s p l ] S p l + l , . . . ,S p Z } 1 . . .
are called the first, second, ... excursions (away from 0) of the random
walk { S k } .
9. The random sequences

El(X) = { ~ p , ( , ) , s p , ( z ) + l ~ '* ~? S P Z ( Z ) h
E2(z) = '.
{ S p 2 ( z ) ,s p z ( Z ) + l l ~ 7 Sp3(z)I1.. .
are called the first, second, ... excursions away from z of the random
walk { S k } .
10. For any t > 0 let a ( t )= S U ~ { T : T < t , W ( T )= 0) and P ( t ) = inf{.r :
T > t , W ( T )= 0). Then the path { W t ( s ) ; a ( t5) s 5 P ( t ) ) is called
an excursion of W ( . ) .
11. c, is the number of those terms of S1, S2, . . . , S, which are positive
or which are equal to 0 but the preceding term of which is positive.
12. O ( n ) = # { k : 15 Ic 5 n, Sk-lSk+l < 0) is the number of crossings.
13. R(n) = max{k : k > 1 for which there exists a 0 < j < n - k such
that c ( O , j + k ) = ( ( 0 , j ) ) is the length of the longest zero-free interval.
14. r ( t ) = sup{s : s > 0 for which there exists a 0 < u < t - s such that
7d0,21+ s) = v(0,u)}.
15. Q ( n )= max{k : 0 5 k 5 n , Sk = 0) is the location of the last zero
up to n.
16. $ ( t ) = sup{s : 0 < s 5 t , W ( s )= 0).
17. R(n) = max{k : k > 1 for which there exists a 0 < j < n - k such
that M + ( j+ k ) = M + ( j ) } is the length of the longest flat interval of
M: up to n.
NOTATIONS A N D ABBREVIATIONS 7

18. +(t)= sup{s : s > 0 for which there exists a 0 < u < t - s such that
+
m+(u s) = m+(.)}.
19. R*(n)= max{k : k > 1 for which there exists a 0 <j < n -k such
+
that M ( j Ic) = M ( j ) } .

20. r*(t)= sup{s : s > 0 for which there exists a 0 < u < t - s such that
m(u + s) = m(u)}.
21. p ( n ) is the location of the maximum of the absolute value of a random
walk { S k } up to n, i.e. p ( n ) is defined by S ( p ( n ) ) = M ( n ) and
p ( n ) 5 n. If there are more integers satisfying the above conditions
then the smallest one will be considered as p ( n ) .
22. M ( t ) = inf{s : 0 < s 5 t for which W ( s )= m ( t ) } .
23. p + ( n ) = inf{k : 0 5 k 5 n for which S ( k ) = M + ( n ) } .
24. M f ( t )= inf{s : 0 < s 5 t for which W ( S )= m+(t)}.
25. x(n)is the number of those places where the maximum of the random
walk So, 4 ,. . . ,S, is reached, i.e. x(n)is the largest positive integer
for which there exists a sequence of integers 0 5 kl < Icz < . . . <
Icx(n) 5 n such that

S(lcl) = S(lC2) = . . . = s(kx(n))


= MC(n).

Abbreviations
1. r.v. = random variable,
2. i.i.d.r.v.’s = independent, identically distributed r.v.’s,

3. LIL = law of iterated logarithm,

4. UUC, ULC, LUC, LLC, AD, QAD (cf. Section 5.1),


5. i.0. = infinitely often,

6. a.s. = almost surely.


This page intentionally left blank
Chapter 1

Introduction of Part I
The problems and results of the theory of simple symmetric random walk
in Z1 can be presented using different languages. The physicist will talk
about random walk or Brownian motion on the line. (We use the expression
“Brownian motion” in this book only in a non-well-defined physical sense
and we will say that the simple symmetric random walk or the Wiener
process are mathematical models of the Brownian motion.) The number
theorist will talk about dyadic expansions of the elements of [0,1]. The
people interested in orthogonal series like to formulate the results in the
language of Rademacher functions. The gambler will talk about coin toss-
ing and his gain. And a probabilist will consider independent, identically
distributed random variables and the partial sums of those.
Mathematically speaking all of these formulations are equivalent. In
order to explain the grammar of these languages in this Introduction we
present a few of our notations and problems using the different languages.
However, later on mostly the “language of the physicist and that of the
probabilist” will be used.

1.1 Random walk


Consider a particle making a random walk (Brownian motion) on the real
line. Suppose that the particle starts from II: = 0 and moves one unit to
the left with probability 1/2 and one unit to the right with probability 1/2
during one time unit. In the next step it moves one step to the left or t o
the right with equal probabilities independently from its location after the
first step. Continuing this procedure we obtain a random walk that is the
simplest mathematical model of the linear Brownian motion.
Let S, be the location of the particle after n steps or in time n. This
model clearly implies that

P{S,+I = in+l I s,= in, sn-1 = i,-l,. . . , s1 = il, s o = io = 0)


= P{S,+I = in+l I s, = in} = 1 / 2 (1.1)

where io = 0, i l , iz, . . . , in, in+l is a sequence of integers with J i l - i 0 J =


J i 2 - ill = . . . = lin+l - i,l = 1. It is also natural to ask: how far does the
particle go away (resp. going away to the right or to the left) during the

9
10 CHAPTER 1

first n steps. It means that we consider


Mn = max
O<kLn
Isk( resp. M L = Omax
<k<n
s k or M L = - Omin
<k<_n
s k .

1.2 Dyadic expansion


Let z be any real number in the interval [0,1] and consider its dyadic
expansion
03

x = 0, &l&Z . . . = &k2-'
k=l

where ~i = ~i(z)
(i = 1 , 2 , .. .) is equal to 0 or 1. In fact
~i = [2iz] (mod 2).

Observe that
X { X : ~ j 1 ( ~ ) = 6 ~1 ,j z ( ~ ) = .6. .~, ~, j , ( 2 ) = 6 , } = 2 - , (1.2)
where 1 5 j 1 < j z < . . . < j,;n = 1 , 2 , . . . ; 61,62,. . . ,6, is an arbitrary
sequence of 0's and +l's and X is the Lebesgue measure. Let SO= So(z)= 0
and S, = Sn(z)= n - 2 Cy=l~i(z) ( n = 1 , 2 , . . .). Then (1.2) implies

A{x : Sn+l = Sn = in,. . . ,sI - '


- 21,
s0 --'20) = 2-(n+1) (1.3)
where io = 0, i l , i2,. . . , in+l is a sequence of integers with lil - iol =
liz - i l l = . . . = lin+l - in1 = 1. Clearly (1.3) is equivalent t o (1.1). Hence
any theorem proved for a random walk can be translated to a theorem on
dyadic expansion.
A number theorist is interested in the frequency N,(s)= Cy=lE ~ ( of z)
the ones among the first n digits of z E [0,1]. Since N,(z) = ( n- Sn(z))/2
any theorem formulated for S, implies a corresponding theorem for N n ( z ) .

1.3 Rademacher functions


In the theory of orthogonal series the following sequence of functions is
well-known. Let
1 if z E [0,1/2),
-1 if z E [1/2,1],
1 if z E [0,1/4) U [1/2,3/4),
-1 if z E [1/4,1/2) U [3/4,1],
1 if z E [0,1/8) U [1/4,3/8) U [1/2,5/8) u [3/4,7/8),
-1 if z E [1/8,1/4) U [3/8,1/2) U [5/8,3/4) u [7/8,1],. . .
INTRODUCTION OF P A R T I 11

An equivalent definition, by dyadic expansion, is


r,(z) = 1 - 2&,(2).
The functions (z), TZ (z), . . . are called Rademacher functions. It is a
sequence of orthonormed functions, i.e.

Observe that
1’ ri(x)rj(x)dx =
1 if
0 if
i =j,
i # j.

X{z: Tjl(Z) = & , r&) =&, . . . ,Tj&) = s , > = 2 - , (1.4)


where 1 5 j l < j z < . . . < j , (n = 1 , 2 , . . .); & , & , . . . ,6, is an arbitrary
sequence of +l’s and -1’s and X is the Lebesgue measure. Putting So =
So(z)= 0 and S, = S,(z) = Cy=l ri(z) ( n = 1 , 2 , . . .) we obtain (1.3).

1.4 Coin tossing


Two gamblers (A and B) are tossing a coin. A wins one dollar if the tossing
results in a head and B wins one dollar if the result is tail. Let S, be
the amount gained by A (in dollars) after n tossings. (Clearly S, can be
negative and So = 0 by definition.) Then SN satisfies (1.1) if the game is
fair, i.e. the coin is regular.

1.5 The language of the probabilist


Let X 1 ,X z , . . . be a sequence of i.i.d.r.v.’s with
P{Xi = 1) = P{Xi = -1} = 1/2 (i = 1 , 2 , . . .),
i.e.
P{Xj., = 61,Xj, = &?,,. , ,xj,= 6,) = 2-, (1.5)
where 1 5 j , < jz < . . . < j,, ( n = 1 , 2 , . . .) and & , S z , . . . ,6, is an
arbitrary sequence of +l’s and -1’s. Let
n
So=o and s,=cxk ( n = 1 , 2 , ...).
k=l

Then (1.5) implies that {S,} is a Markov chain, i.e.


P{S,+1 = in+l 1 s, = in, S,-l = & - I , . . . ,Sl = i l , so = io = 0 )
= P{S,+1 = in+l I s, = in} = 1/2 (1.6)
where i,-, = 0, i l , i 2 , . . . ,in,in+l is a sequence of integers with Jil - iol =
t i 2 - ill = . . . = (in+l- in( = 1.
This page intentionally left blank
Chapter 2

Distributions

2.1 Exact distributions


A trivial combinatorial argument gives
THEOREM 2.1

(2.1)

where Ic = -n, -n + 1,. . ,n; n = 1 , 2 , . . .,


,

P{S2,+1 = 2k + 1) = (2.2)

where k = -n - 1,- 7 2 , . . . , n; n = 1 , 2 , . . .. (2.1) and (2.2) together give

(2.3)
lo otherwise

where k = -n, -n + 1,.. . , n; n = 1 , 2 , .. .. Further, for any n = 1 , 2 , .. . ,


tE rw]- we have

(2.4)

The following inequality (Bernstein inequality) can also be obtained by


elementary methods:

THEOREM 2.2 (cf. e.g. Rknyi, 1970/B, p. 387)

for any n = 1 , 2 , . . . and 0 <E 5 1/4.

13
14 CHAPTER 2

For later reference we present also a slightly more general form of the
Bernstein inequality.
THEOREM 2.3 (cf. R h y i , 1970/B, p. 387). Let X ; , X;, . . . be a se-
quence of i.i.d.r.u. with
P{X,. = 1) = 1 - P{x; = 0) = p .
T h e n f o r any 0 < E 5 pq we have

where S: = X; +X,* +.. . +X: and q = 1 - p .


A slightly more precise form of the above Theorem is the so-called
L A R G E D E V I A T I O N THEOREM (cf. Durrett, 1991, p. 61). Let
O < a < 1. T h e n

lim n-l logP{S: 2 nu} = --


n+cc 2
4(1 + a ) -
a log -
p(1-a)
1 log -
2
4P9
1-$‘
+
THEOREM 2.4 (cf. e.g. Rknyi, 1970/A, p. 233).
Pn,k = p{ML = k )
n
2-n ( k = 0 , 1 , 2 , . . . , n ; n = 1,2,. . .), (2.5)

(2.6)

Proof 1 of (2.5). (Renyi, 1970/A, p. 233). Let


k

Then

(2.7)
DISTRIBUTIONS 15

Similarly for k = 0
1
~ n + l , o= Pix1 = -1, ML 5 1) = Z(pn,l +pn,o). (2.8)

Since p l , =
~ p1,l = 1/2 we get (2.5) from (2.7) and (2.8) by induction.

P r o o f 2 of (2.5). Clearly

P{ML 2 k } = P{Sn 2 k} + P{S, < k,M,+ 2 k}

jzn (mod2)

Let

p l ( k ) = min{l : Sl = k } ,

i.e. S:') for 1 > p l ( k ) is the reflection of Sl in the mirror y = k . (Hence


the method of this proof is called reflection principle.) Then

j=n (mod 2)

(2.9)

which proves (2.5).


(2.6) can be obtained by a direct calculation.
16 CHAPTER 2

THEOREM 2.5 For any integers a 5 0 5 b, a < b, a 5 v 5 b we have

p,(a, b, V ) = P{u < - M i 5 M: < b, S, = v}


03 M

= C qn(v + 2k(b - a ) ) - C qn(2b - v + 2k(b - a ) ) (2.10)


k=-03 k=-oo

where

(j= -72, -n + 1 , .. . , n; n = 0 , 1 , . . .).


P r o o f . (Billingsley, 1968, p. 78). In case n = 0

> 0,
” = { 01 if v = O a n d a 2 + b 2
otherwise,

and we obtain (2.10) easily. Assume that (2.10) holds for n - 1 and for
any a , b, v satisfying the conditions of the Theorem. Now we prove (2.10)
by induction. Note that p,(O, b, v) = p n ( a , 0, v) = 0 and the same is true
for the righthand side of (2.10) (since the terms cancel because q n ( j ) =
q , ( - j ) ) . Hence we may assume that a < 0 < b. But in this case a 1 5 0 +
and b - 1 _> 0. Hence by induction (2.10) holds with parameters n - 1, a +
+
1,b 1,v and n - 1,a - 1,b - I,v. We obtain (2.10) observing that

and

THEOREM 2.6 For any integers a 5 0 5 b and a 5 u 5 v 5 b we have

P{u < - M i _< M Z <b , <~ S, < U}


M

= P{u + 2k(b - a ) < S, < v + 2k(b - a ) }


k - 0 0

w
- P{2b - v + 2k(b - a ) < S, < 2b - u + 2k(b - a ) } , (2.11)
k=-m
DISTRIBUTIONS 17

P{a < - M i 5 111,' < b }


03

= P{a + 2k(b - a ) < S, < b + 2k(b - a ) }


k=-m

- c
k=-rn
03

P{b + 2k(b - a ) < s, < 2b - a + 2k(b - u ) } (2.12)

and

P{M, < b } = c
k=-m
03

P((4k - 1 ) b < s, < (4k + 1)b)

- c k=-m
03

P((4k + l ) b < s, < (4k + 3)b}. (2.13)

(2.11) is a simple consequence of (2.10), (2.12) follows from (2.11) taking


2~= a , Y = b and (2.13) follows from (2.12) taking a = -b.
To evaluate the distribution of I i ( n , a ) (i = 1 , 2 , 3 , 4 , 5 ) seems to be
very hard (cf. Notations to the Increments). However, we can get some
information about the distribution of I1 (n,a ) .

LEMMA 2.1 (Erd6s - Rkvksz, 1976).

j+2
p ( n + j , n ) := P{Il(n + j , n ) = n } = -
2n+l ( j = 0,172,. . . , n )

+
Clearly p ( n j , n ) is the probability that a coin tossing sequence of
+
length n j contains a pure-head-run of length n.

Proof. Let

A = { I l ( n+ j , n ) = n } and Ak = {Sk+, - Sk = n}.

Then

A = A0 + Ao.41 + &A1 A2 + . . . + AoAl . . . A j _ l A j


= AO AoAi + + AiAz + . + Aj-IAj.
*.

Since P{Ao} = 2-, and P{AoAl . . . AjAj+l} = 2-,-' for any j = 1 , 2 , .. . ,


we have the Lemma.
The next recursion can be obtained in a similar way.
18 CHAPTER 2

L E M M A 2.2 For any j = 1 , 2 , .. . we have

P{I1(2n + j ,n) = n}

In case j 5 n we obtain

In some cases it is worthwhile t o have a less exact but simpler formula.


For example, we have

L E M M A 2.3 (Deheuvels - Erd6s - Grill - Rkvesz, 1987).

(j+2)2-n-l- (j+2)22-2n-2 <


- P {Ii(n+j,n)= n } 5 (j+2)2-"-l (2.14)

for any n = 1 , 2 , . . .; j = 1 , 2 , . . ..

The idea of the proof is the same as those of the above two lemmas.
The details are omitted.
The exact distribution of 2, (cf. Notations to the Increments) is also
known, namely:

THEOREM 2.7 (Szkkely - Tusnady, 1979).

P{Zn < s} = 2-97>)

where

Remark 1. Csaki, Foldes and Koml6s (1987) worked out a very general
method to obtain inequalities like (2.14). Their method gives a somewhat
weaker result than (2.14). However, their result is also strong enough to
produce most of the strong theorems given later (cf. Section 7.3).
DISTRIBUTIONS 19

2.2 Limit distributions


Utilizing the Stirling formula

(where 0 < g n < 1) and the results of Section 2.1, the following limit
theorems can be obtained.

THEOREM 2.8 (e.g. Renyi, 1970/A, p. 208). Assume that for svme
0 < E < 1 / 2 the inequality En < k < (1- E)n is satisfied. Then

where K = k/n and d ( K ) = Klog2K + (1- K ) log2(1 - K ) . If we also


assume that I k - n/2 I= o(n2I3) then

Especially

The next theorem is the so-called Central Limit Theorem.

THEOREM 2.9 (Gnedenko - Kolmogorov, 1954, 840).

sup lP{n-112Sn < 2) - +((.)I 5 2n-I”.


X

A stronger version of Theorem 2.9 is another form of the Large Deviation


Theorem:

THEOREM 2.10 (e.g. Feller, 1966, p. 517).

provided that 0 < xn = o(n1/6).

Theorem 2.10 can be generalized as follows:


20 CHAPTER 2

THEOREM 2.11 (e.g. Feller, 1966, p. 517). Let X ; , X ; ,... be a se-


quence of i.i.d.r.v. 's with

EXz*= 0, E(XZ*)' = 1, E(exp(tX%*))< co

for all t in some interval 1 t I< to. Then

provided that 0 < x, = o(n1/6) where 5'; = X ; + X,*+ . . . + X:


THEOREM 2.12 (e.g. Rknyi, 1970/A, p. 234).

n+w
lim P{~-~/~M Z = P{INI< x) = 2+(x) - I
< x}

uniformly in z E R1 where N E N ( 0 , l ) . Further,

lim E ( ~ - ~ / ' M z )= (2/7r)'/'.


n-+w

THEOREM 2.13

lim P{n-'/'M,
n+oo
< x} = G(x) = H ( x )

uniformly in x E R' . Further,

P{n-'/2Mn > x,} P{n-1/2Mn < xi'}


lim = lim =1
1-G(xn) n+w H(G')

where

G(z) = -
6 -xk=-cc jZ (-1)kexp (- (u - 2kx)2

H(x)= - -
7r 2k+1 8x2
k=O

provided that 0 < x, = o(n1l6). Consequently for any E >0


P{n-'I2Mn > z,} 2 (1 - & ) ( I- G(z,))

(2.15)
DISTRIBUTIONS 21

<(l+E)--e 4 1 -x;/2
, (2.16)
xn

(2.17)

and

> 8 x ~ )- j1e x p (--g-z:)]


4(1 - E ) [exp ( - 7r2
- ____ 97r2 (2.18)
7r

if 0 < x n = o(n116)and n is large enough.

Remark 1. As we claimed G ( x ) = H ( x ) however in Theorem 2.13 the


asymptotic distribution in the form of G(.) is proposed to be used when x
is large. When z is small, H ( . ) is more adequate.
Finally we present the limit distribution of 2,.
THEOREM 2.14 (Foldes, 1975, Goncharov, 1944). For any positive in-
teger k we have

P{Z, - [ I ~ N<] I C )= exp(-2-("')-QgN) 1 + 41)


where {lg N } = lg N - [lg N].
Remark 2. As we have mentioned earlier the above Theorems can be
proved using the analogous exact theorems and the Stirling formula. Indeed
this method (at least theoretically) is always applicable, but it often requires
very hard work. Hence sometimes it is more convenient to use characteristic
functions or other analytic methods.
This page intentionally left blank
Chapter 3

Recurrence and the Zero-One Law

3.1 Recurrence
One of the most classical strong theorems on random walk claims that the
particle returns to the origin infinitely often with probability 1. That is
RECURRENCE THEOREM (Pblya, 1921).
P{S, = 0 i.0.) = 1.
We present three proofs of this theorem. The first one is based on the
following lemma:
LEMMA 3.1 Let 0 5 i 5 k. T h e n f o r a n y m 2 i w e have

P(0, i, k)
= P{min{j : j 2 m, Sj = 0) < min{j :j 2 m, Sj = k} I S, = i}
= k - l ( k - i), (3.1)
i.e. the probability that a particle starting fromi hits 0 before k is k-' ( k - i ) .
Proof. Clearly we have

p(O,O, k) = 1, p ( 0 , Ic, 5) = 0.
When the particle is located in i then it hits 0 before k if
(i) either it goes to i - 1 (with probability 1/2) and from i - 1 goes to 0
before k (with probability p(0,i - 1,k)),
+
(ii) or it goes to i 1 (with probability 1/2) and from i + 1 goes to 0
+
before k (with probability p ( 0 ,i 1,k ) ) .
That is
1 1
p(O,i,k) = zp(0,i - 1,k) + -p(O,i
2
+ 1, k)
( i = 1 , 2 , . . . , k - 1). Hence p(O,i,k) is a linear function of i, being 1 in 0
and 0 in Ic, which implies (3.1).

23
24 CHAPTER 3

Proof 1 of the Recurrence Theorem. Assume that Sl= 1, say. By


(3.1) for any E > 0 there exists a positive integer no = no(&)such that
p(O,l,n) = 1 - 1 / n 2 1 - E if n 2 no. Consequently the probability that
the particle returns to 0 is larger than 1 - E for any E > 0. Hence the
particle returns to 0 with probability 1 at least once. Having one return,
the probability of a second return is again 1. In turn it implies that the
particle returns to 0 infinitely often with probability 1.

Proof 2 of the Recurrence Theorem. Introduce the following notations

00

(Note that q2k is the probability of the event that the first return of the
particle to the origin occurs in the (2k)-th step but not before.)
Since P2k M (7rk)-1/2(cf. Theorems 2.1 and 2.8) we have

Observe that

and

Hence
RECURRENCE A N D T H E ZERO-ONE LAW 25

Multiplying the k-th equation by z2k (121 < 1) and summing up t o


infinity we obtain
P ( z ) = J'(z)Q(z) + 1,
i.e.
1
Q ( z ) = 1 - - and lim Q ( z ) = 1.
P(z) z71
Since Q(1) = cr=l q 2 k = 1 is the probability that the particle returns t o
the origin at least once, we obtain the theorem.

3.2 The zero-one law


The above two proofs of the Recurrence Theorem are based on the fact that
if
P{S, = 0 at least for one n } = I
then
P{S, = 0 Lo.} = 1.
Similarly one can see that if P{S, = 0 at least for one n } were less than 1
then P{S, = 0 i.0.) would be equal to 0. Hence without any calculation one
can see that P{S, = 0 Lo.} is equal t o 0 or 1. Consequently in order to prove
the Recurrence Theorem it is enough to prove that P{S, = 0 i.o.} > 0.
In the study of the behaviour of the infinite sequences of independent
r.v.'s we frequently realize that the probabilities of certain events can be
only 0 or 1. Roughly speaking we have: let Y1, YZ, . . . be a sequence of
independent r.v.'s. Then, if A is an event depending on Y,,Y,+l,. . . (but
it is independent from Yll Yz,. . . , Y,-l) for every n, it follows that the
probability of A equals either 0 or 1. More formally speaking we have

ZERO-ONE LAW (Kolmogorov, 1933). Let Y1,Y2,. . . be independent


r.v. 's. Then if A C R is a set, measurable on the sample space of Y,,Y,+1,. ..
for every n, it follows that

P{A}=O or P{A}=l.

Example 1. Let Yl, Y2,.. . be independent r.v.'s. Then c:, Y , converges


a s . or diverges a.s.
Having the zero-one law we present a third proof of the Recurrence
Theorem. It is based on the following:
LEMMA 3.2 For any -m < a _< b < +cc we have
P{Iiminf S, = u } = P{limsupS, = b } = 0.
n+oo ,--too
26 CHAPTER 3

P r o o f is trivial.

P r o o f 3 of the Recurrence Theorem. Lemma 3.2, the zero-one law


and the fact that S, is symmetrically distributed clearly imply that

P(1iminf S, = -co} = P{limsupS, = co} = 1 (3.2)


n+m n+ce

which in turn implies the Recurrence Theorem.


Note that (3.2) is equivalent to the Recurrence Theorem. In fact the
Recurrence Theorem implies (3.2) without having used the zero-one law.
Chapter 4

From the Strong Law of Large Numbers


to the Law of Iterated Logarithm

4.1 Borel-Cantelli lemma and


Markov inequality
The proofs of almost all strong theorems are based on different forms of the
Borel-Cantelli lemma and those of the Markov inequality. Here we present
the most important versions.

BOREL-CANTELLI LEMMA 1Let A1 ,Az, . . . be a sequence of events


for which C,"=lP{A,} < 00. Then

P{limsupA,} = P = P{An i.0.) = 0,


n+m n = l i=n

i.e. with probability 1 only a finite number of the events A, occur simulta-
neously.
BOREL-CANTELLI LEMMA 2 Let A1 ,Az , . . . be a sequence of pair-
wise independent events for which C,"==,
P{A,} = 00. Then
P{limsupA,} = 1,
n+cc

i.e. with probability 1 an infinite number of the events A , occur simultane-


ously.
BOREL-CANTELLI LEMMA 2*(Spitzer, 1964). Let A l , Az, . . . be a
sequence of events for which
n n
00 xP{AkAi)
P{An} = oc, and liminf i=l
< c (C 2 1).
n+ 00
n= 1

27
28 CHAPTER 4

Then
p 2
~ { l i m s u A,} C-l.
n-iw

MARKOV I N E Q U A L I T Y Let X be a non-negative r.w. with EX < cm.


Then for any X > 0
1
P{X 2 XEX} 5 -.
x
As a simple consequence of the Markov inequality we obtain
CHEBYSHEV I N E Q U A L I T Y Let X be an r.u. with EX2 < 00. Then
>0
for any X
1
P{IX-EXI2 x ( E ( x - E x ) ~ ) ~ / ~ = 2 x ~ E ( x - E x ) ~5) -.
} P{(x-Ex)~
X2

Similarly we get

T H E O R E M 4.1 Let X be an r.w. with E(exp(tX)) < 00 for some t > 0.


Then for any X > 0 we have

Borel-Cantelli lemmas 1 and 2 and Markov inequality can be found


practically in any probability book (see e.g. R h y i , 1970/B).

4.2 The strong law of large numbers


THEOREM OF B O R E L (1909).

(4.1)

Remark 1. Applying this theorem for dyadic expansion, for almost all
x E [0,1] we obtain limn-ioo n-lN,(x) = 1/2. In fact the original theorem
of Borel was formulated in this form. Borel also observed that if instead of
the dyadic expansion we consider t-adic expansion (t = 2,3, . . .) of z E [0,1]
and N,(x, s, t ) (s = 0 , 1 , 2 , . . . , t-1, t = 2 , 3 , . . .) is the number of s’s among
the first n digits of the t-adic expansion of x , then

lim
,--so0 /N,(x’s7t)- i l = O
n t ( s = O , 1 , 2 ,...,t - 1 ; t = 2 , 3 ,...) (4.2)

for almost all z. Hence Borel introduced the following:


STRONG LAW AND LIL 29

Definition. A number z E [0,1] is normal if for any s = 0 , 1 , 2 , . . . t - 1;


t = 2 , 3 , . . . (4.2) holds.
The above result easily implies
THEOREM 4.2 (Borel, 1909).
Almost all z E [O, 11 are normal.
It is interesting t o note that in spite of the fact that almost all z E [O, 11
are normal it is hard to find any concrete normal number.

Proof 1 of (4.1). (Gap method). Clearly (cf. (2.4))


En-’S, = 0 , E n - 2 S i = n-l
Hence by Chebyshev inequality for any E >0
P{ln-’S,[ 2 E} 5 n-’&-’
and by Borel-Cantelli lemma 3
nV2Snz -+0 as. ( n -+ co).
Now we have t o estimate the value of Sk for the k’s lying in the gap, i.e.
+
between n2 and (n 1)’. If n2 5 k < (n 1)’ then +
Ik-’Skl = In-’S,zn’k-l +k- (s k - S,z)l 5 )72-’S,zI + k - ’ ( ( n + 1)’ - n 2 ) .
Since both members of the right-hand side tend to 0, the proof is complete.
Proof 2 of (4.1). (Method of high moments). A simple calculation gives

and again the Markov inequality and the Borel-Cantelli lemma imply the
theorem.
As we will see later on most of the proofs of the strong theorems are
based on a joint application of the above two methods.

4.3 Between the strong law of large numbers


and the law of iterated logarithm
The Theorem of Borel claims that the distance of the particle from the
origin after n steps is JS,J= o ( n ) a.s. It is natural to ask whether a better
rate can be obtained. In fact we have
30 CHAPTER 4

THEOREM OF HAUSDORFF (1913). For a n y E > 0

lim n--1/2--Es ,= 0 a s .
n+Oo

Proof. Let K be a positive integer. Then a simple calculation gives

ES:K = O ( n K ) .

Hence the Markov inequality implies

~{ls,l
2 TI'/^+') = P { s ~ ~
2 nK+EK)5~(n-'~).
If K is so big that E K > 1 then by the Borel-Cantelli lemma we obtain the
theorem. (The method of high moments was applied.)
Similarly one can prove

THEOREM 4.3

as.

Proof. By (2.4) we have

E exp(K1/2Sn)-+ (n -+ a).

Hence

+
P{S, 2 (I &)n1/2logn)
= P{exp(n-1/2Sn) 2 e x p ( ( l + E ) logn) = nl+') 5 n-1--E/2

if n is large enough. Consequently

and the statement of the theorem follows from the symmetry of S,.
The best possible rate was obtained by Khinchine. His result is the
so-called Law of Iterated Logarithm (LIL).
STRONG LAW AND LIL 31

4.4 The LIL of Khinchine


THE LIL OF KHINCHINE (1923).

lirn sup b,S, = lirn sup bnlSnl = lirn sup b,M,


n+m n+m n-xs
= lim sup b,M: = lim sup b,M; = 1 a.s.
n+oo n+cc

where 6, = (2nl0glogn)-'/~.
Proof. The proof will be presented in two steps. The first one gives an
upper bound of limsup,,, b,Mn, the second one gives a lower bound of
lim supn+, b,S,. These two results combined imply the Theorem.

Step 1. We prove that for any E >0

limsup bnMn 5 1+ E a s .
n+w

By (2.16) we obtain

P{Mn 2 (1+ E ) ~ L ' } 5 ex p (-(l+ E)loglogn) = (logn)-l-E (4.3)

if n is large enough. Let nk = [@'I (0 > 1). Then by the Borel-Cantelli


lemma we get
M , ~5 (1+ ~ ) b ; : a.s.
for all but finitely many k. Let nk 5 n < nk+1. Then
Mn 5 Mnk+, 5 (1 + &)bzj+l 5 (1+ 2 ~ ) b G ;5 (1+ 2 ~ ) b ; ' a.s.

provided that 0 is close enough to 1.


We obtain
lim sup b,T, 51
n+w

where T, is any of S,, ]S,I, M,, M k , M;.


Observe that in this proof the gap method was used. However, to obtain
inequality (4.3) it is not enough to evaluate the moments or the moment
generating function of M, (or that of S,) but we have to use the stronger
result of Theorem 2.13.

Step 2. Let nk = [ O k ](0 > 1). Then for any E >0


-l+E/2
P{bn,+, (snk+, - s n k ) 2 1- &> 2 0 ((log nk+l) )
32 CHAPTER 4

if k is large enough. Since the events {b,k+l(S,k+l


- S,,) > (1 - E ) } are
independent, we have by Borel-Cantelli lemma 2

bn,+l (Snk+l- S,,) > 1- E i.0. a.s.

Consequently
b n E + l ~ n , +>
l 1 - E - bm+, S
I, I.
Applying the result proved in Step 1 we obtain

Is, I = biiISnkIbnkbnk+l L (I + ~)b;ib,,+~ I E


hk+, a.s.

if k and 0 are large enough. Hence limsup,,, b,S, 2 1 - E a.s. for any
E > 0, which implies the Theorem.
Note that the above Theorem clearly implies

lim sup S, = co, lim inf S, = -co a.s.,


,
4 00 ,+a

which in turn implies the Recurrence Theorem of Section 3.1.


For later references we mention the following strong generalization of
the LIL of Khinchirie.

LIL OF HARTMAN-WINTNER (1941). Let Y1,Y2,.. . be a sequence


of i.i.d.r.u. ’s with
EYI = 0 , EY: = 1.
Then
lim sup b,(Y1
n+cc
+ Y2 + . . . + Y,) = 1 as.

Remark 1. Strassen (1966) also investigated the case EY; = 00. In fact
he proved that if Yl,Yz,. . . is a sequence of i.i.d.r.v.’s with EYI = 0 and
EY? = co then

lim sup b,lY1


n+cc
+ Y2 + . . . + Y,( = 00 a.s.

Later Berkes (1972) has shown that this result of Strassen is the strongest
possible one in the following sense: for any function f ( n )with lim f ( n )= 0
R,,
there exists a sequence Y1,Y2,. . . of i.i.d.r.v.’s for which EYI = 0, EYf = 03
and
+ +
lim b, f (n)IY1 Yz . . . + Y,l = 0 a.s.
,400
Chapter 5

Levy Classes

5.1 Definitions
The LIL of Khinchine tells us exactly (in certain sense) how far the particle
can be from the origin after n steps. A trivial reformulation of the LIL is
the following:
(i) for any E > 0

S, 5 (1+ &)b;' a.s. for all but finitely many n

and
(ii)
S, 2 (1 - c)b;' i.0. a.s.

Having in mind this form of the LIL, Levy asked how the class of those
functions (or monotone increasing functions) f(n) can be characterized for
which
S, < f ( n ) as.
for all but finitely many n. (i) tells us that (1 + E)b;' is such a function
for any E > 0 and (ii) claims that (1 - &)b;' is not such a function. The
LIL does not answer the question whether b;' is such a function or not.
However, one can prove that b;' is not such a function but (2n(loglogn +
3/2 log log log n))1/2belongs t o the mentioned class. In order to formulate
the answer of Lilvy's question introduce the following definitions.
Let ( Y ( t )t, 2 0) be a stochastic process then

Definition 1. The function a l ( t ) belongs t o the upper-upper class of


{ Y ( t ) >(a1 E UUC(Y(t))) if for almost all w E R there exists a to(w) > 0
such that Y ( t )< q ( t ) if t > t o = t o ( w ) .

Definition 2. The function az(t) belongs to the upper-lower class of


{ Y ( t ) }(a2 E ULC(Y(t))) if for almost all w E fl there exists a sequence of
positive numbers 0 < tl = t l ( w ) < t 2 = t z ( w ) < . . . with t , + 00 such that
Y(t2)2 @(ti)(i = 1 , 2 , . . .).

33
34 CHAPTER 5

Definition 3. The function ~ ( tbelongs ) to the lower-upper class of


{Y(t)} (a3 E LUC(Y(t))) if for almost all w E R there exists a sequence of
positive numbers 0 < tl = t l ( w ) < t 2 = t2(w) < . . . with t , + 00 such that
Y ( t i )5 a 3 ( t z ) (i = 1 , 2 , .. .).
Definition 4. The function a4(t) belongs t o the lower-lower class of { Y ( t ) }
(a4 E LLC(Y(t))) if for almost all w E R there exists a t o = to(w ) > 0 such
that Y ( t )> ~ ( tif )t > t o .

Let Y1 Y2, . . . be a sequence of random variables then the four Levy classes
UUC(Y,), ULC(Y,), LUC(Y,), LLC(Y,) of {Y,} can be defined in the
same way as it was done above for Y (t).
We introduce two further definitions strongly connected with the above
four definitions of the Levy classes.
Definition 5. The process Y ( t )is asymptotically deterministic (AD) if
there exist a function a l ( t ) E UUC(Y(t)) and a function a4(t) E LLC(Y(t))
such that limt+, la4(t)- al(t)l = 0.
Consequently

lim la4(t) - Y(t)I = lim lal(t) - Y(t)I = 0 a.s.


t+cc t+m

Definition 6. The process Y(t) is quasi AD (QAD) if there exist a


function al(t) E UUC(Y(t)) and a function ~ ( tE )LLC(Y(t)) such that
limsup,,, la4(t) - al(t)l < 00.

The definition of AD (resp. QAD) sequences of r.v.’s can be obtained


by a trivial reformulation of Definitions 5 (6).

Remark 1. Clearly UUC(Y,) (resp. UUC(Y(t))) is the complementer of


ULC(Y,) (resp. ULC(Y(t))) and similarly LUC(Y,) (resp. LUC(Y(t))) is
the complementer of LLC(Y,) (resp. LLC(Y(t))).

5.2 EFKP LIL


Now we formulate the celebrated Erdos (1942), Feller (1943, 1946), Kol-
mogorov-Petrowsky (1930-35) theorem.

EFKP LIL The nondecreasing function a(.) E UUC(Y,) if and only if


LEVY CLASSES 35

where Y, is any of n-1/2S,, n-1/2JS,J, n-l/'M n, n-'l2M+a , n-'/'M;.

This theorem completely characterises the UUC(Y,) if we take into con-


sideration only nondecreasing functions and it implies

Consequence 1. For any E >0


S, 5 (n(2 log log n + (3 + E ) log log log n))l/, as. (5.1)

for all but finitely many n. Further


(5.2)

Here we present the proof of Consequence 1 only instead of the proof of


EFKP LIL (cf. Remark 1 at the end of Section 5.3).
Proof of Consequence 1. The proof will be presented in two steps.

Step 1. We prove that for any E > 0 and for all but finitely many n
M , 5 (n(2loglogn + (3 + ~ ) l o g l o g l o g n ) ) l a/ ~s . , (5.3)

which clearly implies (5.1). By (2.16) we obtain

(5.4)

Let

Then by the Borel-Cantelli lemma we get

for all but finitely many k. Let nk 5 n < nk+1. Then

(5.5)

which implies (5.3).


36 CHAPTER 5

Step 2. Introduce the following notations

Then clearly

P { A , , } = O(k-'(logk)-'),
P { A L k }= O ( k - ' ( l ~ g k ) - ~ / ~ ) ,

and for any j <k =j + m by (2.15) we have

and
LEVY CLASSES 37

Observe that

and
JZ-1 1
if x>1,
m >-I - - &
with

m _- j
= exp
+
log(j m) l o g j log(j + m)
for any j large enough, we obtain
1I 2

if 1 5 m 5 (log 4) log j , and

if m > (log4) logj.


38 CHAPTER 5

Similarly

Hence
C 2 O(ml/') if 2loglogj 5 m 5 (log4) l o g j (5.6)
and
4
C 2 -4(nk) if (log4) l o g j 5 m 5 (logj) loglogj.
10
In case m > logj(log1ogj) we obtain
%=O(-) 1
(5.7)
k log k
and
j+logj(log logj)
P{A,,A,,} 5 0 (j-'(logj)-'loglogj) . (5.8)
k=j+l

Having (5.7) and (5.8) a simple calculation gives

and
N
C P { A , , } = O(log1ogN).
k=l

Hence the Borel-Cantelli lemma 2* of Section 4.1 and the zero-one law of
Section 3.2 imply the theorem. A simple consequence of EFKP LIL is

THEOREM 5.1 The nonincreasing function -c(n) E LLC(n-1/2S,) if


and only if I l ( c ) < co.

The Recurrence Theorem of Section 3.1 characterizes the monotone el-


ements of LLC(IS,I). In fact

THEOREM 5.2 A monotone function d(n) E LLC(IS,I) if and only if


d(n) < 0 for any n large enough.

Remark 1. For the role of Kolmogorov in the proof of EFKP LIL see
Bingham (1989).
LEVY CLASSES 39

5.3 The laws of Chung and Hirsch


The characterization of the lower classes of Adn and M’, is not trivial at
all. We present the following two results.

THEOREM OF CHUNG (1948). The nonincreasing function a(n) E


L L C ( ~ - ~ / ~ Mif, )and only if

I 2 ( a ) := c
M

n=l
n-l(a(n))-Z exp --(
$-2(n)) < 00.

THEOREM OF HIRSCH (1965). The nonincreasing function P(n) E


LLC(n-l12M$) if and only if
M

&(p) := Cn-’p(n)< 00.

Note that Theorem of Chung trivially implies

1/2
log log n
liminf
n+m ( 7 Js
n/r, =)
lr
as. (5.9)

(5.9) is called the “Other LIL”.

Remark 1. The proof of EFKP LIL is essentially the same as that of


Consequence 1. However, it requires a lemma saying that if a monotone
function f(.) E UUC(Sn) then f(n) 2 bi1/2 and i f f ( . ) E ULC(S,) then
f(n) 5 2b;l. The proofs of Theorems of Chung and Hirsch are also very
similar t o the above presented proof of Consequence 1 (Section 5.2). How-
ever, instead of (2.15) and (2.16) one should apply (2.17) and (2.18).

5.4 When will Sn be very large?


We say that S, is very large if S, 2 b;’. EFKP LIL of Section 5.2 says
that Sn is very large i.0. a.s. Define

a(n>= max(IC : o I IC _< n, s k 2 b;’}, (5.10)

i.e. a(.) is the last point before n where SI, is very large. The EFKP LIL
also implies that a(n) = n i.0. as. Here we ask: how small can a(n) be?
This question was studied by Erdos-Rkvksz (1989). The result is:
40 CHAPTER 5

THEOREM 5.3
(log log n)1/2
lim inf log-4 n ) = -c U.S.
,+a (log log log n )log n n

where C is a positive constant with

Equivalently
a(.) 2 n1-6n U.S.

for all but finitely m a n y n where


log log log n
6, = c
(log log n ) 1 / 2 .

The exact value of C is unknown.


Clearly one could say that S, is very large if

(i) S, 2 (1- ~ ) ( 2 n l o g l o g n ) ~ / ~(0< E < 1) or


(ii) S , >_ (2n(loglogn + 310gloglogn/2))~/~,
etc.
These definitions of “very large” are producing different a’s instead of
the one defined by (5.10). It is natural t o ask: what can be said about
these new a’s?
It is also interesting to investigate the time needed t o arrive from a very
large value to a very small one. Introduce the following notations: let

a1 = min{/c : IC 2 3, sk 2 b ~ l ) ,
= min{k : k > a l , s k 5 -bil},
a2 = min{IC : IC > P I , sk 2 b ~ l ) ,
p2 = min{IC : IC > a 2 , sk 5 - b k l } ,

Define a sequence of integers {nk} by

Then by Theorem 5.3 between nk and nk+l there exist integers j and 1 such
that
Sj >_ b j ’ and Sl 5 - b t l .
Hence we have
LEVY CLASSES 41

THEOREM 5.4
pk 5 nk as.
f o r all but finitely many k.
Very likely a lower estimate of pk is also close to nk but it is not proved.

Remark 1. The lim sup of the relative frequency of those i's (1 5 i 5 n)


for which Si 2 (1 - &)by1 is investigated in Remark 2 of Section 8.1.

5.5 A theorem of Cs&ki


The Theorem of Chung (Section 5 . 3 ) implies that with probability 1 there
are only finitely many n for which

M n = max(Mz, M i ) < (1 - E )

or in other words there are only finitely many n for which simultaneously

for any 0 < E < 1. At the same time Theorem of Hirsch (Section 5 . 3 )
implies that with probability 1 there are infinitely many n for which

In fact there are infinitely many n for which

M,'< -.
log n
Roughly speaking this means that if A42 is small, smaller than

(1- & ) ( log


8
7r2n
log n '
then M; is not very small, it is larger than

(1-&) ( 7r2n
8 log log n
)1/2

provided that n is large enough. Cshki (1978) investigated the question of


how big M i must be if M Z is very small. His result is
42 CHAPTER 5

THEOREM 5.5 Let a(.) > 0 , b(n) > 0 be nonincreasing functions.


Then
P{M,+ 5 a ( n ) n 1 I 2 and M; 5 b(n)n1I2i.0.1
1 if 1 4 ( 4 7 1 . ) , b ( n ) )= 00,
0 otherwise

and c ( n ) = a(.) + b(n).


The special case a ( n ) = b(n) of this theorem also gives Chung’s theorem.
Formally this theorem does not contain Hirsch’s theorem.
In order t o illustrate what Csaki’s theorem is all about we present here
two examples.

Example 1. Put

a(.) = C(loglogn)-1/2 (0 < c < r / h )


and
b(n) = D(loglogn)-1/2 (D> 0 ) .
Then
I 4 ( a ( n ) ,b ( n ) ) < rm if D < T / & - C
and
14(a(n),b(n))= co if D 2 r / h - C.
Applying Csaki’s theorem, this fact implies that the events

{ M; < C (A)
log log n
‘ I 2 and M; < D (
log log n
z
‘I2}

occur infinitely often with probability 1 if D 2 T / & - C. However, it is


not so if D < - C. That is to say if n is large enough and

M,+ < c (-)lI2 n (0 < c < T/&),


log log n
then it follows that

M i 2D (-)log log
n
n ‘ I 2 for any 0 < D < r / h - C.
L E V Y CLASSES 43

Example 2. Put
a ( n ) = (1ogn)-" (0 < Q < 1)
and
b ( n ) = E(loglogn)-1/2 ( E > 0)
Then

< 00
14(a(n),b(n)) if 0 < CY, < 1 and E < 7r(2(1-
and

I4(a(n),b(n))= 00 if 0 < cy <1 and E 2 n(2(1 - a))-'/'.

Observe also that


14((1ogn)-l, F(logIoglogn)-1/2) < 00 if F < r/Jz
and
14((logn)-1,~(logloglogn)-1/2)= 00 if F 2 r/Jz.
Applying Cshki's theorem this fact implies that the events

{M: < n1/2(logn)-a and M; < En1/2(10glogn)-1/2}


resp.

{M,' < n'/'(logn)-l and M i < Fn1~2(logloglogn)-'/2}


occur infinitely often with probability one if E 2 7r(2(1 - (resp.
F 2 ../a).However, it is not so if E < 7 ~ ( 2 ( 1 - a ) ) - ~ (resp.
/~ F < ~/f
As we have mentioned already, CsBki's theorem states that if one of the
r.v.'s M L and M; is very small then the other cannot be very small. It
is interesting t o ask what happens if one of the r.v.'s M Z and M; is very
large. In Section 8.1 we are going to prove Strassen's Theorem 1, which
easily implies that for any E > 0 the events
1-&
M: 2 -(2nl0glogn)~/' and M; 2 -
1 - &(2n log log n)1/2}
3 3
occur infinitely often with probability 1, but of the events

{ M,' 2 *(2n
3
10glogn)l/~ and M i 2 *(2n
3
1 0 g lo g n ) l/~

only finitely many occur with probability 1. In general one can say
44 CHAPTER 5

THEOREM 5.6 For any c > 0 and 1/3 5 q < 1 the events

occur infinitely often with probability 1, but of the events

only finitely many occur with probability 1.

As a trivial consequence of Theorems 5.5 and 5.6 we obtain

THEOREM 5.7 Consider the range MG = M$+M; of the random walk


{ S n } . Then f o r any E > 0 we have

+
(1 ~ ) ( 2 n l o g l o g n )E~ UUC(M;),
/~
(1- ~ ) ( 2 n l o g l o g n )E~ ULC(M:),
/~

2 loglogn

(1 - c ) (f -) n
2 loglogn
ll2 E LLc(M;).

Theorems 5.5 and 5.6 describe the joint behaviour of M$ and M;. We
also ask what can be said about the joint behaviour of S, and M; (say).
In order to formulate the answer of this question we introduce the following
notations.
Let ~ ( n 6(n)
) , be sequences of positive numbers satisfying the following
conditions:
~ ( nmonotone,
)
6(n)-1 0,
n1/2Y(n) t co,
n1/26(n) 00.t
Further let f ( n )= r ~ ' / ~ $ (EnULC(S,)
) with $(n)t co. Define the infinite
random set of integers

Then we have
LEVY CLASSES 45

THEOREM 5.8 (Cs6ki-Grill, 1988). For any f (n) =n1l2$(n) EULC(S,)


the function

g(n) = n1/2y(n) E UUC(M;,n E C)


if and only if
f ( n )+ 2g(n) E UUC(Sn).
Further,
n1’26(n) E L L C ( M i , n E 5)

Remark 1. n1/2y(n) E UUC(M;,n E <) means that n1/2y(n) _> M;


a.s. for all but finitely many such n for which n E (. In other words,
the inequalities Sn 2 f(n) and M; 2 n’l2y(n) simulteneously hold with
probability I only for finitely many n.

Consequence 1. Let V ( n )= min(Mz,M;). Then f(n)E UUC(V) if


and only if 3 f ( n )E UUC(S,).

Remark 2. Theorem 5.6 in case q = 1 / 3 follows from the above Con-


sequence 1. For other q’s (1/3 < q < 1) Theorem 5.8 implies Theorem
5.6.

~ <E
Example 3. Let f ( n )= ( ( 2 - ~ ) n l o g l o g n ) ~ / (0 < 2). Then we find
the inequalities

hold with probability 1 only for finitely many n. However,

The above two statements also follow from Strassen’s theorem 1 (cf. Section
8.1). Further,

S, 2 (1 - ;)1’2b;1 and M; 5 n1/2(logn)-q/2 i.0. a.s. (5.11)

if and only if Q 5 E.
This page intentionally left blank
Chapter 6

Wiener Process and Invariance Principle

6.1 Four lemmas


+
Clearly the r.v. a - l ( S ( k a ) - S ( k ) ) can be considered as the average
speed of the particle in the interval (k,k + a ) . Similarly the r.v.

is the largest average speed of the particle in (0, n ) over the intervals of size
a. We know (Theorem 2.9) that a-l/’(S(k + a ) - S ( k ) ) is asymptotically
(a + co) an N ( 0 , l ) r.v. Hence S ( k + a ) - S ( k ) behaves like a1/2 or by the
LIL of Khinchine

lim sup
+
S(k a) - S(k)
= 1 as.
a-+w (2alogloga)1/2
for any fixed k . We prove that even I l ( n , a ) cannot be much bigger than
a1/2. In fact we have
LEMMA 6.1 Let a = a, 5 na (0 < a < 1). Then

if c > 2.
Proof. By Theorem 2.10 for any k

S(k +a) - S(k) 2


na/2(log n ) 1 / 2
c }M 1 - @(C(logn)1’2)

5 exp (- 2
C2 log n ) = n-c2/2

as n -+ 00. Hence

47
48 CHAPTER 6

and the Borel-Cantelli lemma implies the statement.

Remark 1. Much stronger results than that of Lemma 6.1 can be found
in Section 7.3.

LEMMA 6.2 Let {Xij; i = 1 , 2 , . . . ; j = 1 , 2 , . . .} be a double array of


i.i.d.r.v. 's with

EXij = 0, EX; = 1, E(exp(tXij)) < 00

for all t in some interval I t I< t o . Then for any K >0 there exists a
positive constant C = C ( K ) such that

for all but finitely many i .

Proof of Lemma 6.2 is essentially the same as that of Lemma 6.1 using
Theorem 2.11 instead of Theorem 2.10.

LEMMA 6.3 Let XI, X2, . .. be i.i.d.r.v. 's with


1
P{Xi = 1) = P{Xi = -1) = -
2
and let
u = min{n : 1x1+ X2 + . . . + X,I = 2).
Then
P{Y = 2k) = 2-k, (k = 1 , 2 , . . .).
Consequently

Eu = 4,
Var u = 8,
1
if t < -1 log2, (6.1)
2
1
Eexp(-t(u - 4)) = ~ if t > 0. (6.2)
2e2t - I

Proof is trivial.
W I E N E R PROCESS A N D I N V A R I A N C E PRINCIPLE 49

LEMMA 6.4 Let v1,v2,. . . be i.i.d.r.v.'s with


P{Y = 25) = 2-k7 (k = 1 , 2 , . . .).
Then

(6.3)

(6.4)

Proof. (6.3) follows from (6.1), (6.2) and the Markov inequality. (6.4) is a
trivial consequence of (6.3).

6.2 Joining of independent random walks


Let { X ( i , j , k), i , j , k = 1 , 2 , . . .} be an array of i.i.d.r.v.'s with
1
P{X,,I, = 1) = P{X,,, = -1} = -
2
and let

k=l
S ( i , j ; O )= 0,
v ( i , j )= min{n : IS(i,j;n)l= 2},
k

j=1
T(1;n) = S ( 1 , l ; n ) .
Note that { T ( l ; n ) , n = 0 , 1 , 2 , . . .} is a random walk. Now we define
the sequence (T(2; n ) , n = 0 , 1 , 2 , . . .} as follows:
1
T(2; n) = -S(2,1; n) signS(1,l; n) signS(2,l; ~ ( 2 ~ 1 )if) 0 5 n 5~(2~
2
Note that T ( 2 ;v ( 2 , l ) ) = S ( 1 , l ; 1).
Now we give the definition of T ( 2 ; n ) when p ( 2 , l ) = v ( 2 , l ) 5 n 5
p ( 2 , 2 ) . Let
1
T(2; n ) = T(2; Y(2,l)) + -S(2,2;
2
n - Y(2,1))R(2,1)
50 CHAPTER 6

where

and
4211) = p(2,1) 5
5 P(2,2).
Similarly for p ( 2 , k ) 5 n 5 p(2, k + 1) ( k = 2 , 3 , . . .) let

T(2;n ) = T(2;p(2,k ) ) + -21S ( 2 , k ; n - p ( 2 , k ) ) R ( 2 ,k )


where

R(2, k ) = sign(S(1,l;k + 1) - S(1,l;k))signS(2, k + 1;v ( 2 ,k + 1))


On the properties of T ( 2 ;n) note that

(i) 2T(2;n) is a random walk,

(ii) T(2;p(2, k ) ) = S(1,l;k ) = T(1;k ) ( k = 1,2,.. .).

Continuing this procedure we define T ( i ;n) as follows:

T ( i ;n) = T ( i ;p ( i , k ) ) + p 1S ( i ,k ; n - p ( i ,k ) ) R ( ik),
if
Pcl(i,k) 5 12 5 4 4k + 1)
where

R(i,k ) = sign(T(i - 1;k + I) - T ( i- 1;k))signS(i,k + 1;p ( i , k + 1)).


On the properties of T ( . ;.) note that

n ) ( n = 1,2,.. .) is a random walk for any i fixed,


(i) 2Zw1T(i;

(ii) T ( i ;p ( i - 1,k ) ) = T ( i - 1;k ) .

On the properties of p ( . , .) by (6.4) we have

(6.5)

Hence
m a x , ( p ( i ,k ) - 4kl 5 2k1/24Ei a.s.
15k14'
W I E N E R PROCESS A N D I N V A R l A N C E PRINCIPLE 51

for all but finitely many i and


22-'T(i; p ( i - 1,k ) ) = 2i-1T(i;4k + 8(i;k ) ) ( k = 1 , 2 , . . . , 4i)
where
)29(i; k)J _< 2k1/24Ei.
By Lemma 6.1
[2Z-'T(i; p ( i - 1,k ) ) - 2i-'T(i; 4k)l 5 (21c1/'44E')'/2C(log(2k1/"4Ei))1/2
<
-
4(1/4+2E)i.

Consequently
T ( i - 1;k) = T ( i ;4k) + 2-(1/2-2E)i. (6.6)

6.3 Definition of the Wiener process


The random walk is not a very realistic model of the Brownian motion. In
fact the assumption that the particle goes at least one unit in a direction
before turning back, is hardly satisfied by the real Brownian motion. In
a more realistic model of the Brownian motion the particle makes instan-
taneous steps to the right or to the left, that is a continuous time scale is
used instead of a discrete one.
In order to build up such a model, assume that in a first experiment
we can only observe the particle when it is located in integer points and
further experiments describe the path of the particle between integers. Let
{ S ( n )= S(O)(n), 72 = 0 , 1 , 2 , . . .}
be the random walk which describes the location of the particle when it
hits integer points.
In a next experiment we observe the particle when it hits the points
k/2 ( k = O , f l , f 2 , . . .). Let SC{ be the obtained process, i.e. S(n)
(1) is a

random walk with the properties:


(i) the particle moves 1/2 to the right or to the left with probability 1/2,

(ii) hits the integers in the same order as Si:; does.

Note that (T(1;k ) , T(2;k ) ) has the properties requiredfrom (S(O)(k), S(l)(k)).
Let
x ( t )= T ( i ;[4%]). (0 5 t 5 1).
Then (6.6) easily implies that {&(t)0 5 t 5 1) converges a.s. uniformly to
a continuous process W ( t ) .This limit process is called a Wiener process.
It is easy t o see that this limit process has the following three properties:
52 CHAPTER 6

(i) W ( t )- W ( s )E N ( 0 ,t - s) for all 0 5 s < t < 00 and W ( 0 )= 0,

(ii) W ( t )is an independent increment process that is W(t2)- W(t1),


W ( t 4 )- W ( t 3 ) ., . . , W(t2i) - W(t2i-1) are independent r.v.’s for all
0 5 tl < t2 5 t 3 < t 4 5 . . . 5 t2i-1 < t2i (i = 2 , 3 , . . .),

(iii) the sample path function W ( t , u )is continuous a.s.

(i) and (ii) are simple consequences of the central limit theorem (Theorem
2.9). (iii) was proved above.

Remark 1. The above construction is closely related to the one of Knight


(1981).

6.4 Invariance Principle


Define the sequence of r.v.’s 0 < 71 < 7 2 < . . . as follows:
71 = inf{t : t > 0, IW(t)I = I},
72 = inf{t : t > 71, IW(t)- W(T1)( = l},
... ...
l inf{t
~ i += : t > ri, IW(t)- W(7i)l= l},. .

Observe that

(i) W ( T I )W(72)
, - W ( T ~ W(73)
), - W(72), . . . is a sequence of i.i.d.r.v.’s
with distribution P{W(TI) = l} = P(W(71) = -1) = 1 / 2 , i.e.
{W(T,)}is a random walk,

(ii) 7 1 ,-
~71,73 - 72,.. . is a sequence of i.i.d.r.v.’s with distribution
pi71 > .I = p{suP,<z IW(t)l < 1).
Applying the reflection principle (formulated for S, in Section 2.1, Proof 2
of (2.5)) for W ( . )we obtain

Evaluating the moments of 71 and applying the strong law of large numbers
we obtain
E71 = 1, ET: = 2, lim n - l ~ ,= 1 a.s.
n+03
W I E N E R PROCESS A N D I N V A R I A N C E PRINCIPLE 53

The above two observations are special cases of a theorem of Skorohod


(1961). Because of (i) we say that a random walk can be embedded t o a
Wiener process (by the Skorohod embedding scheme).
Applying the LIL of Hartman-Wintner (1941) (Section 4.4) and some
elementary properties of the random walk (formulated in Section 6.1) we

This result can be formulated as follows:


THEOREM 6.1 On a rich enough probability space {R,.F,P} one can
define a Wiener process { W ( t ) , t2 0) and a random walk {S,, n = 0 , 1 , 2 , . . .}

This result is a special case of a theorem of Strassen (1964).


A much stronger result was obtained by Koml6s-Major-Tusnady (1975-
76). A special case of their theorem runs as follows:
INVARIANCE PRINCIPLE 1 On a rich enough probability space one
can define a Wiener process { W ( t ) , t 2 0) and a random walk { S n , n =
O,1,2,. . .} such that
IS, - W(n)l= O(l0gn) a.s.
Remark 1. A theorem of Bartfai (1966) and Erdos-Renyi (1970) im-
plies that the Invariance Principle 1 gives the best possible rate. In fact if
{S,, n = 0 , 1 , 2 , . . .} and { W ( t ) t, 2 0 } are living on the same probability
space {R, .TIP} then 1 S, - W ( n )12 O(1og n) as. except if S, is the sum
of i.i.d. N ( 0 , l ) r.v.’s.
As a trivial consequence of Invariance Principle 1 we obtain
THEOREM 6.2 Any of the EKFP LIL, the Theorems of Chung and
Hirsch and Theorems 5.3, 5.4 and 5.5 remain valid replacing the random
walk S, b y a Wiener process W ( t ) . A s an example we mention: Let Y ( t )
be any of the processes
t-1/2 w(t),
t - l q W ( t )I]

t-1/2 sup IW(s)(,


o<s<t

t-ll2 sup W ( s ) ,
o<s<t
54 CHAPTER 6

Then a nondecreesing function u(t) E UUC(Y(t))if and only if

lm(-T) t exp dt < 03. (6.8)

Similarly a nonincreasing function ( a ( t ) ) - l E LLC(tP1I2m(t))


if and only
if
2 ( - g ( a ( t ) ) 2 )dt < 03.
1 ° 0 t - 1 ( a ( t ) )exp (6.9)

Remark 2. In fact the EFKP LIL only implies that u ( n ) E UUC(Y(n)) ( n =


1 , 2 , . . .) if u ( - ) is nondecreasing and (6.8) is satisfied. In order to get our
Theorem 6.2 completely we have to know something about the continuity of
W ( . ) ,i.e. we have to see that the fluctuation supkcLsupklslk+l IW(s)-
W ( k ) (cannot be very big. For example, the complete result can be ob-
tained by Theorem 7.13, especially Example 2 of Section 7.2, which says
that the above fluctuation is asymptotically (2 logt)lI2 a.s.
Theorem 6.2 claimed that any of the strong theorems formulated up to
now for S, will be valid for W ( . ) .The same is true for the limit distribution
theorems of Section 2.2. In fact we have

THEOREM 6.3

P(t-l/2m+(t) > u} = 2(1 - qu)) ( t > 0 , u > O),

For later references we give a more general form of the above Invariance
Principle.

INVARIANCE PRINCIPLE 2 (Koml6s-Major-TusnAdy, 1975-76).


Let F ( x ) be a distribution function with

J, 2 2 d F ( z ) = 1,
00 00

J, z d ~ ( z=) 0,
W I E N E R PROCESS A N D I N V A R I A N C E PRINCIPLE 55

with some t o > 0. Then, on a rich enough probability space {fl,F,P},one


can define a Wiener process { W ( t ) ,t 2 0 } and a sequence of i.i.d.r.v.3
Y1,Y2,. . . with P{Y1 < x} = F ( x ) such that

IT, - W(n)I = O(l0gn) as.,

where T, = Yl + Y2+ . . . + Y,.


This page intentionally left blank
Chapter 7

Increments

7.1 Long head-runs


In connection with a teaching experiment in mathematics, T. Varga posed
a problem. The experiment goes like this: his class of secondary school
children is divided into two sections.
In one of the sections each child is given a coin which he then throws two
hundred times, recording the resulting head and tail sequence on a piece of
paper. In the other section the children do not receive coins but are told
instead that they should try to write down a “random” head and tail se-
quence of length two hundred. Collecting these slips of paper, he then tries
t o subdivide them into their original groups. Most of the time he succeeds
quite well. His secret is that he had observed that in a randomly produced
sequence of length two hundred, there are, say, head-runs of length seven.
On the other hand, he had also observed that most of those children who
were to write down an imaginary random sequence are usually afraid of
putting down head-runs of longer than four. Hence, in order t o find the
slips coming from the coin tossing group, he simply selects the ones which
contain head-runs longer than five.
This experiment led T . Varga to ask: What is the length of the longest
run of pure heads in n Bernoulli trials?
A trivial answer of this question is

THEOREM 7.1
ZN
lim - = 1 a s .
N+cc lg N
where ZN is the length of longest head-run tall N .

Proof.

Step 1. We prove that

ZN
lim inf -> 1 as. (7.1)
N-tm 1gN -

57
58 CHAPTER 7

Let E < 1 be any positive number and introduce the notations:

t = [(l- & ) l g N ] ,
N= I;[ -1,

Clearly Uo, U 1 , . . . , Uw are i.i.d.r.v's with


1
P{Uk = t } = -.
2t
Hence
P{Uo < t , u1 < t , . . . , u, < t } =
and a simple calculation gives

2
N=l
(l-;)m<OO

for any E > 0. Now the Borel-Cantelli lemma implies (7.1).

Step 2. We prove that

(7.2)

Let E be any positive number and introduce the following notations:

u=[(l+E)lgN],
vk =SkfU -s k ( k = 0, 1 , . . , N - u ) ,

AN = u
N-u

k=O
{vk u}

and let T be any positive integer for which T E> 1. Then

consequently
INCREMENTS 59

Hence the Borel-Cantelli lemma implies

lim sup -'kT < 1 a.s. (7.3)


k+m IgkT -

Let kT < n < ( k + l)Tand observe that by (7.3)

2, 5 Z ( k + l ) T <_ (1 + &) lg(k + <_ (1 + a&) lgkT <_ (1 f 2E) Ign

with probability 1 for all but finitely many n. Hence we have (7.2) as well
as Theorem 7.1.
A much stronger statement is the following:
THEOREM 7.2 (ErdBs-Rkvksz, 1976). Let {a,} be a sequence of positive
numbers and let
00

A({a,}) = 2-an.

Then

(7.4)
(7.5)

and for any E >0


i C , = [ I g n - I g I g I g n + I g I g e - 1 + ~ ] ELUC(Z,), (7.6)
+
A, = [Ign - IgIgIgn IgIge - 2 - E ] E LLC(Z,). (7.7)

Example 1. If 6 > 0 and


a: = l g n + (I + 6) Iglgn then A({a:}) < co.
Hence (7.4) and (7.7) together imply that

P{A, 5 2, 5 a; for all but finitely many n } = 1.


Note that if n = 2220 = - and E = S = 0 , l then A, =
1048569 and a; = 1048598.

Remark 1. Clearly (7.4) and (7.5) are the best possible results while (7.6)
and (7.7) are nearly the best possible ones.

A complete characterization of the lower classes was obtained by Guibas-


Odlyzko (1980) and Samarova (1981). Their result is:
60 CHAPTER 7

THEOREM 7.3 Let

$, = l g n - l g l g l g n + l g l g e - 2 .

Then
liminf[Z, - +,I =O as.
n+cc

It is also interesting to ask what is the length of the longest run con-
taining at most one (or at most TI T = 1 , 2 , . . .) (-1)’s. Let Z,(T) be the
largest integer for which

11(n,Z,(T)) 2 Z n ( T ) - 2T
where
11(n,a ) = o s y g p + a- Sk).
A generalization of Theorem 7.2 is the following:
THEOREM 7.4 Let {a,} be a sequence of positive numbers and let

(7.8)
(7.9)

(7.10)

(7.11)

A very nice question connected to Theorem 7.2 was proposed by Ben-


jamini, Haggstrom, Peres and Steif (2003). They considered the following
model: let {$):;I j = 0 , 1 , 2 , . . . ; n = 1 , 2 , . . .} be a sequence of indepen-
dent Poisson processes with rate 1, also independent from the sequence
X 1 , X 2 ,.... Let {X,(d; J = 1 , 2 , . . .; n = 1 , 2 , . . .} be an array of i.i.d.r.v.’s

with
p{xp = 1) = p{xp = -1) = 1/2
INCREMENTS 61

and consider the processes


~ , ( t )= x?) for $(j-l)
1 2 < t < $2).
-

Clearly for any t fixed the sequence {X,(t) n = 1 , 2 , . . .} is a random walk.


Hence any theorem formulated for random walks remains valid for X,(t)
(for any fixed t ) . However, it can happen that for some t we get some new
phenomenon.
For example for any t > 0, Z,(t) (the longest head-run of X,(t) up to
n ) obeys Theorem 7.2 i.e.
a , E UUC(Z,(t)) if A ( { a i } ) < 00
and
a, E ULC(Z,(t)) if A ( { u T Z =
} ) co.
Benjamini et al, proved that for some (random) t it is not the case. For
example
P{3t 2 0 such that {Z,(t) 2 a, i.0. }}

n=l

Hence for fixed t the length of the longest run containing at most one
tail only, might be as big as the length of the longest pure head-run for a
randomly choosed t. This remark and Theorem 7.4 combined suggest the
following
Conjecture.
P(3t 2 0 such that {Z,(T, t ) 2 a, i.0. }}
00

n= 1

A trivial reformulation of the question of T. Varga is: how many flips are
needed in order to get a run of heads of size m. Formally speaking let Zm
be the smallest integer for which
-
Xi,-nZ+l - XZ,-m+z -
- ... =x- = 1.
Zn
As a trivial consequence of Theorem 7.2 we obtain
62 CHAPTER 7

THEOREM 7.5

;
i, E UUC(Z,),
Km E ULC(Zm),
6 , E LUC(2,) if A ( { a m } )= 00,
6 , E LLC(Zm) if A ( { a m } ) <

where K , (resp. 1 ), are the inverse functions of Ic, of (7.6) (resp. Am


of (7.7)) and a , is the inverse function of the positive increasing function
I

am.
Instead of considering the pure head-runs of size m one can consider any
given run of size m and investigate the waiting time till that given run
would occur. This question was studied by Guibas-Odlyzko (1980).
Erdos asked about the waiting time V, till all of the possible 2, patterns
of size m would occur at least once. An answer of this question was obtained
by Erdos and Chen (1988). They proved
THEOREM 7.6 For any E > 0
+
(1 ~ )2 ,m
E UUC(V,)
k e
and
(1 - €)2,rn
E LLC(V,).
k e

We mention that the proof of Theorem 7.7 is based on the following limit
distribution:
THEOREM 7.8 (Mbri, 1989)
INCREMENTS 63

In order t o compare Theorem 7.7 and Theorems 7.2 and 7.3 it is worthwhile
to consider the inverse of v k . Let

U, = max{k : vk 5 n}.
Then Theorem 7.7 implies

Corollary 1. (Mbri, 1989). For any E > 0 we have

for all but finitely many n. Consequently U, is QAD.

Observe that U, is “less random” than 2,. In fact for some n’s the
lower and upper estimates of U, are equal to each other and for the other
n’s they differ by 1. Clearly U, 5 2, but comparing Theorems 7.2, 7.3
and Corollary 1 it turns out that U, is not much smaller than 2,.
In Theorem 7.2 we have seen that for all n , big enough, there exists a
block of size A, (of (7.7)) containing only heads but it is not true with K,
(of (7.6)). Now we ask what is the number of disjoint blocks of size A,
containing only heads.
Let vn(k) be the number of disjoint blocks of size k (in the interval [0,n])
containing only heads, that is to say vn(k) = j if there exists a sequence
+ +
0 I tl < t l k 5 t 2 < t2 f k 5 . . . 5 t j < t j k 5 n such that

Sti+k-Sti = k (i=1,2,...,j)

but

Sm+k - S, <k if ti + k 5 m < ti+l (i = 1 , 2 , . . . , j - 1)

or t j + k < m < n - k .
The proof of the following theorem is very simple.
THEOREM 7.9 (Rkvesz, 1978). For any E > 0 there exist constants
0 < a1 = Q1(&) 5 a2 = CYZ(&) < 00 such that

(for A, see (7.7)).


This theorem says that in the interval [0,n] there are O(lg lg n) blocks of
size A, containing only heads. This fact is quite surprising knowing that it
64 CHAPTER 7

happens for infinitely many n that there is not any block of size A,+2 2 Ic,
containing only heads.
Deheuvels (1985) worked out a method to find some estimates of a1 ( E )
and a,(&).In order to formulate his results let 2, = 2:’ and let Zp’ 2
Zi3’ 2 . . . be the length of the second, third, . . . longest run of 1’s observed
in X I ,X Z ,. . . ,X,. Then
THEOREM 7.10 (Deheuvels, 1985). For any integer r 23 and k 2 1
and for any E > 0
1
lgn+ ( I ~ ~ ~ + ~ ~ ~ + l g , _ ~ ~ E+UUC(ZF’),
( l + ~ ) I ~ (7.12)
, ~ )
1
lgn+ ( l g z n + . . . + l g , _ , n + l g , n ) E ULC(Zik)), (7.13)
[lgn - lg, n + l g l g e - 11 E LUC(Zik)), (7.14)
[lgn - lg, n + lglge - 2 - E ] E LLC(Zik)). (7.15)

Remark 2. In case k = 1 (7.14) gives a stronger result than (7.6) but


(7.14) and (7.15) together is not as strong as Theorem 7.3.
T H E O R E M 7.11 (Deheuvels, 1985). Let 21 E (0, f c o ) be given, and let
0 < c; < 1 < c i < co be solutions of the equation
1
c-1-logc= -.
U
(7.16)

Then f o r any E > 0 we have


[lgn - Ig, n + lg, e - lgc: - 1 + EI E U U C ( Z ; ’ O ~ ~ ~ ] ) , (7.17)
+
[Ign - lg, n Ig, e - 1gc: - 2 - & I E U L C ( Z ! ’ ~ ~ ~ ~) 7 ] (7.18)
[~gn-~g,n+~g,e-~gc:,E +~L ]U C ( Z ; ’ ~ ~ Z ~ ] ) , (7.19)
+
[lgn - lg, n lg, e - lgc; - 2 - E ] E L L C ( Z ~ ~ O ~ Z ~ ] )(7.20).

Remark 3. This result is a modified version of the original form of the


theorem. It is also due to Deheuvels (oral communication).
Theorem 7.2 also implies that

liminf vn(Zn) = 0 a.s.


n+m

if 1, 2 A, but
INCREMENTS 65

Now we are interested in lim v,( [lg n + lg lg n ] )and formulate our


simple
THEOREM 7.12 (RkvBsz, 1978).

limsupv,([lgn f l g l g n ] ) 52 as. (7.21)


n+cc

Finally we mention a few unsolved problems (Erdos-Revksz, 1987).

Problem 1. We ask about the properties of 2, - 2


:) = 2:) - 2i2).It is
clear that P(2:' ) i.0. } = 1. The limsup properties of 2:) - 2i2'
=2
:
look harder.
Problem 2. Let K, be the largest integer for which

Characterize the limit properties of K,. Observe that Theorem 7.9 suggests
Kn
0 < limsup ___ < 03.
n-+m loglogn

Problem 3. Let 2: be the length of the longest tail run, i.e. 2; is the
largest integer for which

I*(n,.q= -2;

where
I*(n,lc)= min (Sj+k - Sj).
O5jsn-k

How can we characterize the limit properties of 1


2,- Z;l? Note that by

and clearly
P(2, = Z: i.0.) = 1.

Problem 4. Let
0 if 2, <_ Z;,
un={ 1 if 2, > 2;
and
66 CHAPTER 7

i.e. U, = 1 if the longest head run up to n is longer than the longest


tail run. We ask: does limn+mCn exist with probability l? In the case
when limn+m L, = C , a.s. then C is called the logarithmic density of the
sequence { U,}.

Problem 5 . (Karlin-Ost, 1988). Consider two independent coin tossing


sequences X I ,Xa, . . . , X , and X i , X i , . . . , XA. Let Y, be the longest com-
mon "word" in these sequences, i.e. Y, is the largest integer for which there
+
exist a 1 5 k , < k , Y, 5 n and a 1 5 &l < ICA Y, 5 n such that +
Xk,+j = Xk;+j if j = 1 , 2 , . . . ,Yn.

Karlin and Ost (1988) evaluated the limit distribution of Y,. Its strong
behaviour is unknown. Petrov (1965) and Nemetz and Kusolitsch (1982)
investigated the length of the longest common word located in the same
place, i.e. they defined Y, assuming that k , = k k . In this case they proved
a strong law for Y,.

7.2 The increments of a Wiener process


This paragraph is devoted to studying the limit properties of the processes
J i ( t , a t ) (i = 1,2,3,4,5) where at is a regular enough function (cf. Nota-
tions t o the increments).
Note that the r.v. a - l ( W ( s + a ) - W ( s ) )can be considered as the
+
average speed of the particle in the interval (s, s a). Similarly the r.v.

(W(s+ a ) - W ( s ) )
-
a l J I ( t , U ) = a-l sup
Ossst-a

is the largest average speed of the particle in (0, t ) over the intervals of size
a. The processes J i ( t , a) (i = 2 , 3 , 4 , 5 , t 2 a ) have similar meanings.
Note also that

J l ( 4 a t ) I min{J2(t, a t ) , J3(tr a t ) } ,
max{J2(t, a t ) ,J 3 ( t , a t ) } I J4(C a t ) .
To start with we present our
THEOREM 7.13 (Csorg6-Rh%z, 1979/A). Let at ( t 2 0 ) be a nonde-
creasing function o f t for which
(i) 0 < at 5 t ,
(ii) t / a t is nondecreasing.
INCREMENTS 67

Then for any i = 1 , 2 , 3 , 4 we have

IimsupytJi(t,at) = limsupyt(W(t
t+m t-+m
+ at) - W(t)(
= limsupyt(W(t + a t ) - W ( t ) )= 1 a.s.
t+m

where
"It = "I(t,at) = 2at log - + loglogt
( ( :t
)) -1/2.

If we also have
(iii)

then
lim rtJi(t,at) = 1 a s .
t-52

In order to see the meaning of this theorem we present a few examples.

Example 1. For any c > 0 we have

lim a.s. (i = 1,2,3,4). (7.22)


t+m clogt

This statement is also a consequence of the Erd6s-Renyi (1970) law of large


numbers.

Example 2.
Ji(t, 1)
lim = 1 as. (i = 1,2,3,4). (7.23)
t--tm (2 log t)1/2

Example 3. For any 0 < c _< 1


Ji (t , ct )
lim sup = 1 as. (i = 1,2,3,4). (7.24)
t+- (2ct log log t ) 1 / 2

In case c = 1we obtain the LIL for Wiener process (cf. Theorem 6.2). Note
that (7.24) is also a consequence of Strassen's theorem of Section 8.1.
Having Theorem 7.13 it looks an interesting question to describe the
Levy-classes of the processes Ji(t,ut) (i = 1,2,3,4) in case of different
at's. Unfortunately we do not have a complete description of the required
Levy-classes. We can only present the following results:
68 CHAPTER 7

THEOREM 7.14 (Ortega-Wschebor, 1984). Let f ( t ) be a continuous


nondecreasing function and assume that at satisfies conditions (i) and (ii)
of Theorem 7.13. Then

f ( t ) E UUC (u;'I2Ji(t, a t ) ) (i = 1,2,3,4)

(7.25)

Further, if
lm(-9) $exp dt = co (7.26)

then
f ( t )E uuc ( a ; l / 2 +
sup ( W ( t s) - W ( t )
Olslat

Remark 1. In case at = t condition (7.26) is equivalent with the corre-


sponding condition of the EFKP LIL of Section 5.2. However, condition
(7.25) does not produce the correct UUC in case at = t. Hence it is nat-
ural to conjecture that, in general, the UUC can be characterized by the
convergence of the integral of (7.26). It turns out that this conjecture is
not exactly true. In fact Grill (1991) obtained the exact description of the
upper classes under some weak regularity conditions on at. He proved

THEOREM 7.15 Assume that

where 0 < 6 < 1, g(y) is a slowly varying function as y -+ co, CO,C1 are
positive constants.
Let f ( t ) > 0 (t > 0 ) be a nondecreasing function. Then

f ( t )E UUC (a;"2Ji(t, a t ) ) (i = 1 , 2 , 3 , 4 )

if and only if

1" ( 1 + g ( t ) f 2 ( t ) )f o e x p
at
(-q) dt < co.

In order to illustrate the meaning of this theorem we present a few examples.


INCREMENTS 69

Example 4. Let at = (logt)" ( a > 0). Then g ( t ) = a / l o g t and

E UUC (a;1/2Ji) if and only if E > 0 (i = 1 , 2 , 3 , 4 ; p = 3 , 4 , 5 , . . .).

Example 5. Let at = exp((1ogt)") (0 < a < 1). Then g ( t ) = a(logt)"-l


and

fP,E(t)

P- 1
21ogt - 2(1ogt)" + ( 3 + 2a)10g2 t + 2 z l o g j t + ( 2 + E ) log,t
j=3

i f a n d o n l y i f & > O ( i = 1 , 2 , 3 , 4 ;p = 3 , 4 , 5 ,...).

Example 6. Let at = ta (0 < a < 1 ) . Then g ( t ) = a and

fP,E (t)
P- 1
=
( 2(1 - a )logt + 510g2 t

E UUC (aF1''Ji)
+ 2 z 1 0 g j t + ( 2 +&)logpt
if and only if
j=3

E > 0 (i = 1 , 2 , 3 , 4 ; p = 3 , 4 , 5 , . . .).

Example 7. Let at = at (0 < a: < 1). Then g ( t ) = 1 and

P-1
=
( 210g, t + 510g, t + 2 c l o g j t + (2 + &) logp t
j=4 ) 1/2

E UUC a-l/'Ji > 0 (i = 1 , 2 , 3 , 4 ; p= 4 , 5 , 6 , . . .)


( t ) if and only if E

Theorem 7.15 does not cover the case a t / t -+ 1. As far as this case is
concerned we present
70 CHAPTER 7

THEOREM 7.16 (Grill, 1991). Let at = t ( l - ,8(t)) where P(t) is de-


creasing to 0 and slowly varying as t -+ 00 and f ( t ) > 0 be a nondecreasing
continuous function. Then

f ( t )E UUC (a;'/'Ji(t, a t ) ) (i = 1 , 2 , 3 , 4 )
if and only if

lrn+ (1 , 8 ( t ) f 2 ( t ) ) exp (-q) dt < 00.

The characterization of the lower classes is even harder. At first we present


a theorem giving a nearly exact characterization of the lower classes when
at is not very large.
THEOREM 7.17 (Grill, 1991). Assume that

Then for any i = 1 , 2 , 3 , 4 we have

LLC ( a ; 1 / 2 J i ( t , a t ) ) if K > Ki,


+
( 2 log A ( t ) log log A ( t )- K)l/' E
LUC (a;1/2Jz(t,at)) if K < Ki

log !!. < Kz 5 logn,


4 -
7-r
log - < K3 5 log47-r,
4 -
7-r
log - < K4 5 lOg7-r.
16 -
If in addition either at is of the form

with

or
INCREMENTS 71

then

log < K3 5 logn,


4 -
IT x
log- < K4 5 log-.
16 - 4

Remark 2. A very similar result was obtained previously by RBvBsz (1982).


However, some of the constants given there are not correct.

Example 8. Let at = t e - r l o g l o g t (0 < r < 00). Then

A(t) = (exp(rloglogt))(loglogt)-l f co.

Hence
Ji(t, at) = 1 as.
lim inf
t+oo (2atr loglogt)l/2
This result was proved by Book and Shore (1978).
If at is so large that the condition A(t) f 00 does not hold, the situation
is even more complicated. We have two special results (Theorems 7.18 and
7.19) only.

THEOREM 7.18 (Csbki-R&&z, 1979, Grill, 1991). If A(t) = C > 0,


i.e. at = Ct(loglogt)-l then with probability 1 we have

liminf J1(t,at) =
+m if c < I-,
t-kw -W if c > r,
where r is an absolute, positive constant, its exact value is unknown.
If A(t) + 0, then

where

Pb) =
+
( ( 2 r 1)x - 1) l j 2
+
r ( r 1)
and
r= [i]
72 CHAPTER 7

Remark 3. Note that if a t = at and l/a is an integer, then P ( u t / t ) = a.


We return to the discussion of this theorem in Section 8.1 in the spe-
cial case when at = at (0 < a 5 1). The first part of Theorem 7.18
suggests the following question. Does there exist a function at for which
lim inft,, 51 (t,ut) = 0 a s . ?

THEOREM 7.19 (Csaki-Rkvksz, 1979). If A(t) -+ 0 , then

where
-1/2
b ( t ) = (;a,a(t)) .

Remark 4. In case a t = t , Theorem of Chung (Section 5.3) implies that

liminf d(t)J4(t,at) = I a.s.


t+oo

However, this relation does not follow from Theorem 7.19.

Remark 5. Ortega and Wschebor (1984) also investigated the upper


classes of the “small increments” of W ( . ) .These are defined as follows:

where a , is a function satisfying conditions (i) and (ii) of Theorem 7.13.

Remark 6. Hanson and Russo (1983/B) studied a strongly generalized


version of the questions of the present paragraph. In fact they described
the limit points of the sequence

for a large class of the sequences 0 5 (Yk < p k < 00.


Finally we present a result on the behaviour of J5 (., .).
INCREMENTS 73

THEOREM 7.20 (Csorgo-Rkv&z, 1979/B). Assume that at satisfies con-


ditions (i), (ii) of Theorem 7.13. Then

liminfKtJg(t,at) = 1 a s .
t+m

where

and
Kt= (8(logta,’ + log log t )) l’
T’at
If (iii) of Theorem 7.13 is also satisfied then
lim KtJg(t,ut) = 1 a s .
t+m

The following examples illustrate what this theorem is all about.


Example 9. Let at = 8 logt/T2 hence iCt -+ 1 (t + 00). Then Theorem
7.20 tells us that for all t large enough, for any E > 0 and for almost all
w E 52 there exists a 0 5 s = s ( ~ , E , w 5
) t - at such that

sup +
( W ( s u)- W ( s ) (5 1 + E
o<u< 5 log t
but, for all s E [0, t - at] with probability 1

sup IW(s + u)- W(s)I 2 1 - E .


o<u< 5 log t
At the same time Theorem 7.13 stated the existence of an s E [0, t - at] for
which, with probability 1,

1 W (s I
+ ;Fz8 log t ) - W ( s ) 2 (: -E) log t

but for all s E [0, t - ut]

Example 10. Let at = t. Then Theorem 7.20 implies


74 CHAPTER 7

Hence we have the Other LIL (cf. (5.9)).


Example 11. Let at = (logt)’/2 hence iCt M 2 f i ( l 0 g t ) ~ / ~ / n .Then
Theorem 7.20 claims that for all t large enough, for any E > 0 and for
almost all w E R there exists an s = s ( t ,E , w ) E [0, t - at] such that

sup JW(S + U) - w(S)J I (1 + E ) z ( i o g t ) - 1 / 4 .


O<u<(log t)1/2 fi
That is to say the interval [0, t - at] has a subinterval of length (logt)1/2
where the sample function of the Wiener process is nearly constant. In fact,
the fluctuation away from a constant is as small as (1 + ~ ) d - ’ / ~ ( l o t)-ll4.
g
This result is sharp in the sense that for all t large enough and all
s E [0, t - at] we have with probability 1

n
SUP IW(S + U) - w ( s )2~(1 - ~ ) - ( i o g t ) - ~ / ~ .
O<ugogt)’/2 Js
Clearly, replacing the condition at = (logt)ll2 in Example 11 by at =
o(logt), we also find that there exists a subinterval of [0, t - at] of size at
where the sample function is nearly constant. Cstiki and Foldes (1984/A)
were interested in the analogue problem when the term “nearly constant”
is replaced by ‘(nearly zero”. They proved

THEOREM 7.21 Assume that at satisfies conditions (i), (ii) of Theorem


7.13. Then

liminf ht
t-im
inf
OLs<t-at O<u<at
sup IW(s + u ) /= 1 a.s.
- -

where

ht= (4 log(ta,’) + 8 log log t


7r2at
If (iii) of Theorem 7.13 is also satisfied then

Example 12. Letting at = t we obtain the Other LIL (cf. (5.9)).

Example 13. If at = o(1ogt) then ht -+ 00 and


INCREMENTS 75

while in case at = 4c27r-2 log t we have ht -+ c-l as t -+ co and

lim inf sup IW(s +.)I = c.


t+m O l s < t - - a t o _ <_ ~ < ~ ~

(Compare Example 13 in case c = 1 and c = fi with the first part of


Example 9.)
Theorems 7.13-7.19 gave a more or less complete description of the
strong behaviour of J i ( t , a t ) (i = 1,2,3,4). To complete this Section we
give the following weak law:
THEOREM 7.22 (Deheuvels-Rkvksz, 1987). Let t/at = d t . Assume that

lim dt = 00. (7.27)


t+m

Then for any i = 1 , 2 , 3 , 4 in probability

(2 l ~ g d t ) ~ / ~ Ji(t, ~ (2 log &)I/')


( a t a~t )/ -
lim = 1/2. (7.28)
t+m log log dt
We also mention that the proof of Theorem 7.22 is based on the following:
THEOREM 7.23 (Deheuvels-Revksz, 1987). Assume (7.27). Then f o r
any -co < y < co we have

-+ exp(-ePY) (t + co)
if i = 2 , 3 , 4 and

+ exp(-ePY) (t -+ 00).
Note that in the above two theorems we have no regularity conditions on
atexcept (7.27).
In order to understand the meaning of (7,28) consider the case i = 2
and assume
1% dt
= T (O<r<co) (7.29)
log log t
In this case Theorem 7.13 implies that Jz(t, a t ) can be as large as

yt-l = (2at)1/2((r + 1)loglogt)l/2.


76 CHAPTER 7

In the same case Theorem 7.17 implies that J 2 ( t ,a t ) can be as small as

(2at)1/2(r log log t)?

(7.28) describes the “typical” behaviour of Jz(t, a t ) under the condition of


(7.29). Namely it behaves like

It is worthwhile t o mention an equivalent but simpler form of Theorem


7.23.
THEOREM 7.24

and

IW(s + 1) - W(s)l5 f ( y , t )

where

f ( y , t ) = (210gt)-1/2
(+
y
1
2 2 7
2logt - - loglogt - - log7r I

Let us give a summary of the results of this section. To study the proper-
ties of the processes J i ( t , u t ) (i = 1 , 2 , 3 , 4 , 5 ) we have to assume different
conditions on at. For the sake of simplicity from now on we always assume
that a t is nondecreasing and satisfies conditions (i) and (ii) of Theorem
7.13.
Then the limit distributions of Ji(., .) for i = 1 , 2 , 3 , 4 are given in The-
orem 7.23. Observe that the limit distributions in case i = 1 and in case
i = 2 , 3 , 4 are different. The limit distribution of J 5 ( . , .) is unknown. The
exact distribution is not known in any case.
A description of the upper classes of Ji(., .) (i = 1 , 2 , 3 , 4 ) is given in
Theorem 7.14 but there is a big gap in this theorem between the description
of UUC(Ji) and ULC(Ji), i.e. there is a big class of very regular functions
for which Theorem 7.14 does not tell us whether they belong to the UUC(Ji)
or to the ULC(Ji). This gap is filled in by Theorem 7.15 if at satisfies a
weak regularity condition. However, this regularity condition excludes the
case a t l t + 1. This case is studied in Theorem 7.16. The above-mentioned
results do not tell us anything about J 5 ( . , . ) . In case if at is not very large
INCREMENTS 77

(condition (iii) of Theorem 7.13 is satisfied) a very weak result is given in


Theorem 7.20.
The lower classes of Ji(., .) (i = 1,2,3,4) are “almost” completely de-
scribed if at << tlloglogt by Theorem 7.17. If at does not satisfy this
condition then Theorems 7.18, 7.19 (resp. 7.20) tell something about the
lower classes of J1(., .), J4(.,.) (resp. J 5 ( . , .)) but we do not have a complete
characterization and we do not have any results (except trivial ones) about
the lower classes of Jz(., .) and J3(., .).

7.3 The increments of SN


By the Invariance Principle 1 (cf. Section 6.3) we obtain that any theorem
of Section 7.2 will remain true, replacing the Wiener process by a random
walk (i.e. replacing Ji by Ii(i = 1 ,2 ,3 ,4 ,5 ))provided that 7;’ is big enough
or equivalently a, is big enough. In fact Theorems 7.13, 7.18-7.21 (resp.
7.14-7.17) remain true if a, >> logn (resp. a, >> ( l ~ g n ) while~ ) Theorems
7.23 and 7.24 remain true as they are. Hence we only study the increments
of S, in the case when limn+oo a , ( l ~ g n ) - ~ += 0 for any E > 0.
A trivial consequence of Theorem 7.1 (resp. Theorem 7.13) (cf. also
Example 1 of Section 7.2) is
THEOREM 7.25
1(n,lg n )
I
lim = 1 a.s. (7.30)
fl--tcc lgn

lim
n+m
J 1 ( . , k n ) = lim
lgn ,--so3
J1 (logn
nl&2
= (210g2)~” a.s. (7.31)
-
log 2

Remark 1. Comparing (7.30) and (7.31) we can see that the behaviours of
1 and J1 are different indeed if a, = clog n (0 < c < co). As a consequence
I
we also obtain that the rate O(1ogn) of Invariance Principle 1 (cf. Section
6.3) is the best possible one. This observation is due to BBrtfai (1966) and
Erdos-Rhyi (1970).
Theorems 7.2, 7.3, 7.4 imply much stronger results on the behaviour of
Ii(., .) than (7.30) of Theorem 7.25. In fact we obtain
THEOREM 7.26 Assuming diflerent growing conditions on {a,} we get
(i) I f f o r some E >0
a, 5 [Ign - Iglglgn +IgIge - 2 - €1 = A.,
78 CHAPTER 7

Then
Ii(n,a,) = a, a.s. (i = 1 , 2 , 3 , 4 )
for all but finitely many n, i.e. I i ( n , a,) is AD.

(ii) If for some E >0

Then
Ii(n,a,) = a, or a, - 2 a.s (i = 1 , 2 , 3 , 4 )
for all but finitely many n, i.e. I i ( n ,a,) is QAD.

(iii) I f for some E >0

Then

Ii(n,a,) = a, or a, - 2 or a, - 4 a s . (i = 1,2,3,4)
for all but finitely many n, i.e. I i ( n , a,) is QAD.

(iv) In general, if

+ +
d,(T) = [ l g n T l g l g n (1 + E ) lglglgn] < a, 5 X,(T 1) +
= [lgn+(T+1)lglgn-lglglgn-lg((T+1)!)+lglge-2-~].

Then
Ii(n,a,) = a, - 2 T - 2 or a, - 2T a.s.
for all but finitely many n, i.e. I i ( n ,a,) is QAD,
and if
A,(T +
1) < a, 5 d,(T 1) +
then

I i ( n , a n ) = a, - 2T - 4 or a, - 2T - 2 or a, - 2T,

for all but finitely many n, i.e. I i ( n , a,) is QAD.

T h e above theorem essentially tells us that I i ( n , a,) is QAD with not more
+
than three possible values i f a, 5 lg n T l g lg n for some T > 0. T h e next
theorem applies for somewhat larger a,.
INCREMENTS 79

THEOREM 7.27 (Deheuvels-Erd6s-Grill-Revesz, 1987). Let a, = O(lg n )


and 0 < T = Tan< a,/2 be nondecreasing sequences of integers. Then
m

a, - 2T E LLC(li(n,a,)) if e~p(-2~p(2,)<
) 00,
n=l
00

a, - 2T E LUC(Ii(n,a,)) if exp(-2,p(2,)) = co,


n= 1
00

a, - 2T E ULC(Ii(n,a,)) if C 2,p(2,) = 00,


n=l
m

an - 2T E U u C ( l i ( ~ ~ , a ~if) ) C 2,p(2,) < 00,


n=l

where
p(.) = (1 - 2) ;"( 2-an-1 ')
Here we present a few consequences.

Consequence 1. Let
an = lgn + f (n)
be a nondecreasing sequence of positive integers with f (n)= o(1gn).

(i) Assume that


lim
n+m
fo
(1gn)E
= 0 for any E > 0.

Then Ii(n,a,) is &AD and there exist an a l ( n ) E UUC(li(n,a,))


and an a4(n) E LLC(Ii(n,a,)) such that al(n) - a4(n) 5 3.

(ii) Assume that


f(n) = 0 ((1gn)O) (0 < 0 < 1).
Then Ii(n,a,) is QAD and there exist an a1( n )E UUC(Ii(n, a,)) and
an ad(n) E LLC(Ii(n,a,)) such that al(n) - a4(n) 5 2/(1- 0 ) 1. +
(iii) Assume that
lim f ( n ) = oo for any E > 0.
n+00 (1gn)l-E
Then Ii(n,an) is not QAD.
80 CHAPTER 7

Consequence 2. Let a, = [Clgn] with C > 1. Then


C ( 1 - 2 / 3 ) 1 g n + ( l + E ) 2 p l g I g n EUUC(li(n,a,)),
C ( 1 - 2 P ) l g n + (1 - & ) 2 p l g l g n E ULC(&(n,a,)),
C(1 - 2p) l g n - 2plglgn - 4plglglgn
+4plg(l -2p) + 4 p l g l g e + 2 p l g 7 r + 6 p + l + ~ E LUC(li(n,a,)),
C ( 1 - 2P)lgn - 2plgIgn - 4plgIglgn
+ + + +
+4plg(l - 2p) 4plgIge 2plg7r 6p 1 - E E LLC(li(n,a,)),

where ,B is the solution of the equation

(2P4(1-P)
1-4
)c =2,
-1
p= ( 2 I g Y )

and E is an arbitrary positive number.

Remark 2. Consequence 2 above is a stronger version of an earlier result


of Deheuvels-Devroye-Lynch (1986).
In the case a, >> Ign we present the following:

THEOREM 7.28 (Deheuvels-Steinebach, 1987). Let a, be a sequence of


positive integers with a, = [a,] where &(logn)-p is decreasing for some
p > 0 and a,/logn is increasing for some p > 0 . Then for any E > 0 we
have

Q,U, - t i 1 loga, + +
(3/2 & ) t i loglogn
1 E uUC(li(n,~,)),
a,a, - t i ' loga, + (3/2 - & ) t i 1loglogn E ULC(li(n,a,)),
a,a, -t,1loga, +(1/2+&)t~110glogE n Luc(l~(n7a,))7
a,a, - t,l loga, + (1/2 - & ) t i 1loglogn E LLC(lz(n, a,))

where a , is the unique positive solution of the equation

and
1
t - -log -I. + a ,
"-2 1-a,
Note that a, x (2~;' logn)l/'.
INCREMENTS 81

In order to study the properties of I5 resp.

first we mention that by the Invariance Principle the properties of J5 resp.

will be inherited if a, 2 ( l ~ g n ) ~ +( 'E > 0 ) . In fact Theorems 7.20 and


7.21 will remain true if J5 (resp. J,*) are replaced by I5 (resp. I:) and
a, 2 ( l ~ g n ) ~ (+E €> 0). Hence we have to study the properties of 1 5 (resp.
- ~0 ( n -+ m) for any E > 0. It turns out that
I:) only when a , ( l ~ g n ) - ~ -+
Theorem 7.21 remains true if a,/logn -+ 00 ( n + 00). In fact we have
THEOREM 7.29 (Csaki-Foldes, 1984/B). Assume that a, satisfies con-
ditions (i) and (ii) of Theorem 7.13 and
Iim a,(logn)-l = 00
,--to3

Then
lim inf h,I; (n,a,) = 1 a s .
n+ co
where h, is defined in Theorem 7.21. If condition (iii) of Theorem 7.13 is
also satisfied, then
lim h,I:(n,a,) = 1 a.s.
,--to3

If a, = [clog n] then we have


THEOREM 7.30 (Csaki-Foldes, 1984/B). Let a, = [clogn] (c > 0) and
define a* = a*(c) > 1 as the solution of the equation
7r
cos - = exp
2a* (-k)
i f a * ( c ) is not an integer then
I:(n, a,) = [a*(c)] a.s.
for all but finitely many n, i.e. I,* is AD,
if a*(c) is an integer then
a*(c) - 1 5 I;(n,a,) 5 a*(c) as.

for all but finitely many n, i.e. I; is QAD. Moreover


I:(n,a,) = a * ( c )- 1 i.0. a.s
and
I:(n, a,) = cr*(c) i.0.a s .
82 CHAPTER 7

The properties of I5 are unknown when logn << a, 5 ( l ~ g n ) However,


~.
we have

THEOREM 7.31 (CsBki-Foldes, 1984/B). Let a , = [clogn] ( c > 0 ) and


define a = Q(C) > 1 as the solution of the equation

7T
cos - = exp
2a
(-:)
if a(.) is not an integer then

15(n1u,) = [ a ( c ) ] U.S.

for all but finitely many n, i.e. I5 is AD,


if a(.) is an integer then

a ( c ) - 1 5 15(n,a,) 5 a(c) a.s.


for all but finitely many n, i.e. 15 is QAD. Moreover

I5 = a ( c ) - 1 i.0.a.s.

and
15 = a(c) i.0.a.s.
Chapter 8

Strassen Type Theorems

8.1 The theorem of Strassen


The Law of Iterated Logarithm of Khinchine (Section 4.4) implies that for
any E > 0 and for almost all w E R there exists a random sequence of
w ,) < 722 = ~ Z ( E , W ) < . . . such that
integers 0 < n1 = n 1 ( ~

S ( n k ) 2 (1 - ~ ) ( 2 n loglognk)1'2
k = (1 - ~ ) ( b ( n k ) ) - ' . (8.1)

We ask what can be said about the sequence { S ( j ) ;j = 1 , 2 , . . . ,n k } (pro-


vided that (8.1) holds). In order to illuminate the meaning of this question
we prove

THEOREM 8.1 Assume that n k = n k ( ~w, ) satisfies (8.1). Then


1 1 - 2E -1
S([nk/21) 2 (1- E y Y n k ) 2 -(b(4) a.s. (8.2)

for all but finitely many lc.


Proof. Let 0 5 ct < 1 - 2~ and assume that
ct(b(nk))-' L S([nk/2l) L ( a + m o k ) ) - ' . (8.3)

Then by (8.1)

S(nk) - S([nk/2]) 2 (1- a - 2E)(b(nk))-l. (8.4)

By Theorem 2.10 the probability that the inequalities (8.3) and (8.4) si-
multaneously hold is equal to O((1og nk)-2(a2+(1-a-2E)2) 1.
Note that if a # 1/2 and E is small enough then 2 ( ~ x ~ + ( l - c t - 2 ~ >
) ~1.)
Hence by the method used in the proof of Khinchine's theorem (Step 1) we
obtain that the inequalities (8.3) and (8.4) will be satisfied only for finitely
many lc with probability 1. This fact easily implies Theorem 8.1.
Similarly one can prove that for any 0 < x < 1

S ( [ x n k ]2
) (1 - ~ ) x S ( n k )a.s. (8.5)

83
84 CHAPTER 8

for all but finitely m a n y lc.


(8.5) suggests that if nk satisfies (8.1) and lc is big enough then the
process {S([xnk]); 0 5 x 5 l} will be close to the process {xS(nk); 0 5
z 5 l}. It is really so and it is a trivial consequence of

STRASSEN’S THEOREM 1 (1964). T h e sequence

is relatively compact in C ( 0 , l ) with probability 1 and the set of its limit


points is S (see notations to Strassen type theorems).
The meaning of this statement is that there exists an event Ro c R of
probability zero with the following two properties:

Property 1. For any w 4 Ro and any sequence of integers 0 < n1 < n2 <
. . . there exist a random subsequence nkj = nkj ( w ) and a function f E S
such that
?,s, ( x , w ) -+ f (x)uniformly in x E [0, I].

Property 2. For any f E S and w 4 Ro there exists a sequence of integers


nk = nk(w, f ) such that

,s, (2, w ) f (x)uniformly in z E [0, I].


The Invariance Principle 1 of Section 6.3 implies that the above theorem is
equivalent to

STRASSEN’S THEOREM 2 (1964). T h e sequence

w,(z) = b,W(nz) (0 5 x 5 1; n = 1 , 2 , .. .)
is relatively compact in C ( 0 , l ) with probability 1 and the set of its limit
points i s S .
Remark 1. Since If (1)l 5 1 for any function f E S and f(z)= z E S,
Strassen’s theorem 1 implies Khinchine’s LIL.

Consequence 1. For any E > 0 and for almost all w E R there exists a
To = TO(&,w)such that if
W ( T )2 (1 - &)(b(T))-l
for some T 2 TO
STRASSEN T Y P E THEOREMS 85

Consequence 1 tells us that if W ( t )“wants” to be as big in point T as it


can be at all then it has to increase in (0, T ) nearly linearly (that is to say
it has to minimize the used energy).
The proof of Strassen’s theorem 2 will be based on the following three
lemmas.

LEMMA 8.1 Let d be a positive integer and c q ,( ~ 2 , .. . , a d be a sequence


of real numbers for which
d

i= 1

Further, let

+
W * ( n )= cqW(n) az(W(2n) - W ( n ) +. +
) . . Qd(W(d72)- W ( ( d- 1)n)).

Then
limsupb,W*(n) = 1 a s . (8.6)
n+rx

and
liminf b,W*(n) = -1 a.s. (8.7)
n+m

Proof of this lemma is essentially the same as that of the Khinchine’s LIL.
The details will be omitted.
The next lemma gives a characterization of S.

LEMMA 8.2 (Riesz-Sz.-Nagy, 1955, p. 75). Led f be (I red walued func-


tion on [0,1]. The following two conditions are equivalent:

(i) f is absolutely continuous and Jt(f’)2d~5 1,

(ii)

(f - f (v))2
(i)
2
i=l 1/r
~ l f o T a n y T = 1 , 2...
,

and f is continuous on [0,1].


In order to formulate our next lemma we introduce some notations. For
any real valued function f E C ( 0 , l ) and positive integer d, let f ( d l be the
linear interpolation o f f over the points i / d , that is

f ( d ) ( z ) = j ( $+) d ( f (7)
(f)) (Pi)
-f
86 CHAPTER 8

if i/d 5 x 5 (i + l)/d, i = 0 , 1 , . . . , d - 1.
Let

where s d C sby Lemma 8.2.


LEMMA 8.3 The sequence { w i d ) ( x ) ;0 5 x 5 I} is relatively compact in
Cd with probability 1 and the set of its limit points is sd.
Proof. By Khinchine's LIL and continuity of Wiener process our statement
holds when d = 1. We prove it for d = 2. For larger d the proof is similar
and immediate. Let V, = ( W ( n )W(2n)
, - W ( n ) )( n = 1 , 2 , . . .) and a,p
be real numbers such that a2+p2 = 1. Then by Lemma 8.1 and continuity
of W the set of limit points of the sequence

is the interval [-1,+1]. This implies that the set of limit points of the
sequence {b,Vn} is a subset of the unit disc and the boundary of the unit
circle belongs to this limit set.
Now let V; = (W(n),W(2n) - W ( n ) , W ( 3 n - ) W(2n)). In the same
way as above one can prove that the set of limit points of { bnV;} is a subset
of the unit ball of R3 which contains the boundary of the unit sphere. This
fact in itself already implies that the set of limit points of {b,V,} is the
unit disc of R2 and this, in turn, is equivalent to our statement.
Proof of Strassen's Theorem 2. For each w E R we have

SUP
O<X<l
JWn(Z) - wL?(x)l 5 SUP SUP
OjxLl Qjs<lld
lw,(x + s) - w,(x)l,
hence by Theorem 7.13 (cf. also Example 3 of Section 7.2)

limsup sup lw,(x) - wid)(x)I


= d-1/2 a.s.
n+w a<.<1

Consequently we have the Theorem by Lemmas 8.2 and 8.3 where we also
use the fact that Lemma 8.2 guarantees that S is closed.
The discreteness of n is inessential in this Theorem. In fact if we define
STRASSEN T Y P E THEOREMS 87

then we have

STRASSEN’S THEOREM 3 (1964). The net w t ( x ) is relatively com-


p a c t in C ( 0 , l ) with probability 1 and the set of its limit points is S.

As an application of Strassen’s theorem we sketch the proof of Theorem


7.18 in the special case at = at.
At first we mention that Strassen’s theorem implies that
lim sup b t ~(lt ,at) = a 1 / 2 a.s.
t+m
which can be obtained by considering the function
o 5 x 5 a,
f(x)= { xa-l/’
a1/2
if
if a <x 5 1
which belongs t o the Strassen’s class S (cf. also Theorem 7.13 and Example
3 of Section 7.2). The fact that bt is the right normalizing factor for the
lim inf also follows from Strassen’s theorem. Let
C, = - lim inf bt J1 ( t ,at)
t+oo
In case a = 1 it is well known that C1 = 1 and this can be obtained
by considering the function f ( s ) = -s (0 5 s 5 1) in S. Considering the
function f ( s ) = -s it is also immediate that C, 2 a. Theorem 7.18 claims,
however, that equality holds (i.e.C, = a ) if and only if 1/a is an integer;
in other cases C, > a.
Now we show that the Strassen’s theorem implies that

(8.8)
Define the function x ( s ) as follows: if 1/a = r (an integer), then let x ( s ) =
-s. If l / a = T + r, where T is an integer and 0 < T < 1, then split the
+
interval [O, 11 into 27- 1 parts with the points
u2i = ia, (i = 0,1,2,. . . ,T),
u2i+l +
= (i ~ ) a .(i = 0 ,1 ,2 , . . . ,r ) .
Let ~ ( s be
) a continuous piecewise linear function starting from 0 (i.e.
z(0) = 0) and having slopes

if u2i < s < u2i+1,


x’(s) =
if u2i-1 < s < u2i.
88 CHAPTER 8

It is easily seen that z(s) so defined is in Strassen’s class, i.e. z(0) = 0, z(s)
is absolutely continuous for 0 < s < 1 and
1
so
d 2 ( s ) d s = 1. Since

z(s + a ) - (.) = -Ca1 0 5 s 5 1- a


we have (8.8). Unfortunately we cannot accomplish the proof of Theorem
7.18 by showing that z(s) defined above is extremal within S. In Csziki-
Rkvksz (1979) the proof was completed by some direct probabilistical ideas.
The details are omitted here.
Here we mention a few further applications of Strassen’s theorem given
by Strassen (1964). At first we present the following:
Consequence of Strassen’s Theorems 1 and 2. If cp is a continuous
functional from C ( 0 , l ) to R1 then with probability 1the sequences cp(wn(t))
and cp(s,(t)) are relatively compact and the sets of limit points coincide with
cp(S). Consequently

lim sup cp(wn(t))= lirn sup cp(s,(t)) = sup cp(z) a s .


n+m n+cc XES

where f ( t ) (0 5 t 5 1) is a Riemann integrable function with

we obtain
F(t)=
6’ f ( s ) d s E L2[0,11,

limsup
n+m 6’ w n ( t ) f ( t ) d t= limsup
n+oo I’ s,(t)f(t)dt

(8.9)

and by integration by parts we get

(8.10)

The above consequence also implies:


STRASSEN T Y P E THEOREMS 89

For any a 2 1 we have

(8.11)

in particular
n

Remark 2. In order to prove (8.11) we have to prove only that

This can be done by an elementary but hard calculation.


Similarly we obtain
n

++-
lim sup b ( n )
n+oo
= 2p as.
CISil
i=l

where p is the largest solution of the equation

A further application given by Strassen is the following. Let 0 5 c 5 1


and
1 if Si > c(b(i))-l,
ci =
0 otherwise.
Consider the relative frequency T~ = n-l Cy=3ci . We have
limsupy, = 1-exp
n+cx

Strassen also notes:


90 CHAPTER 8

“For c = 1/2 as an example we get the somewhat surprising result that


with probability 1 for infinitely many n the percentage of times i 5 n when
Si 2 1/2(2iloglogi)’/2 exceeds 99.999 but only for finitely many n exceeds
99.9999.”
Finally we mention a very trivial consequence of Strassen’s theorem.
THEOREM 8.2 T h e set

{ b t m + ( z t ) ; 0 5 2 5 1) (t -+ 00)
and t h e sequence

are relatively compact in C ( 0 , l ) with probability 1 and t h e set of their limit


points is t h e set of t h e nondecreasing elements of S . T h e analogous state-
m e n t s for m(t) and M ( n ) are also valid.

8.2 Strassen theorems for increments


As we have already mentioned, Khinchine’s LIL is a simple consequence of
Strassen’s theorem 1. Here we are interested in getting such a Strassen type
generalization of Theorem 7.13. At first we mention a trivial consequence
of Theorem 7.13.
Consequence 1. For almost all w E R and for all E > 0 there exists a
To = T o ( E , wsuch ) that for all T 2 To there is a corresponding 0 5 t =
t ( w , E , T ) 5 T - UT such that
W(t + U T ) - W ( t )2 (1- E ) ( y ( T ,U T ) ) - ’ M (1- E ) ( 2 a log
~ T u T ~ ) ~(8.12)

provided that UT satisfies conditions (i), (ii), (iii) of Theorem 7.13.
Knowing Consequence 1of Section 8.1 we might pose the following ques-
tion: does inequality (8.12) imply that W ( z )is increasing nearly linearly
+
in ( t ,t U T ) ? The answer to this question is positive in the same sense as
in the case of Consequence 1 of Section 8.1.
In order to formulate our more general result introduce the following
notations:

(9 r t , T ( Z ) = ?(TIU T ) ( W ( t+ Z U T ) - W ( t ) )(0 5 2 i 1)’


c C(0,l) as follows:
(ii) for all w E R define the set VT = VT(W)

: o 5 t 5 T - aT},
vT = {rt,T(2)
STRASSEN T Y P E THEOREMS 91

(iii) for any A C C(0,l) and E > 0 denote U ( A ,E ) the E-neighbourhood of


A, that is a continuous function a(.) is an element of U ( A ,E ) if there
- la(x) - a(.)/ 5 E .
exists an u ( x ) E A such that suposz<l
Now we present
THEOREM 8.3 (Rkvksz, 1979). For almost all w E R and for all E >0
there exists a TO= T o ( w , Esuch
) that

U(VT(W),E)3 s (8.13)

(8.14)
if T 2 TOprovided that aT satisfies conditions (i), (ii), (iii) of Theorem
7.13.
To grasp the meaning of this Theorem let us mention that it says that:
(a) for all T large enough and for all s(z) E S there exists a 0 < t < T - u ~
such that r t , T ( z ) (0 5 z 5 1) will approximate the given s(x),
(b) for all T large enough and for all 0 < t < T - U T the function rt,T(z)
(0 5 x 5 1) can be approximated by a suitable element s(z) E S.
We have to emphasize that in Theorem 8.3 we assumed all the conditions
(i), (ii), (iii) of Theorem 7.13. If we only assume conditions (i) and (ii) then
we get a weaker result which contains Strassen's theorem 3 in case at = T .
THEOREM 8.4 (Rkvksz, 1979). A s s u m e that UT satisfies conditions (i)
and (ii) of Theorem 7.13. T h e n f o r almost all w E SZ and for all E > 0 there
exists a To = T ~ ( E , w
such
) that

vT(w) c U(S,&)

i f T 2 To.
Further, f o r any s = s(x) E S, E > 0 and f o r almost all w E R there
exist a T = T ( E , ws ,) and a 0 5 t = t ( E , u , s ) 5 T - aT such that

Remark 1. The important difference between Theorems 8.3 and 8.4 is


the fact that in Theorem 8.3 we stated that for every T big enough and for
every s(z) E S there exists a 0 5 t 5 T - U T such that rt,T(z) approximates
the given s(z); while in Theorem 8.4 we only stated that for every s(z) E S
92 CHAPTER 8

there exists a T (in fact there exist infinitely many T but not all T are
suitable as in Theorem 8.3) and a 0 5 t 5 T - UT such that r t , T ( x )
approximates the given s(x).
In other words if UT is small (condition (iii) holds true), then for every T
(big enough) the random functions I’t,T(x) will approximate every element
of S as t runs over the interval [0, T - a*]. However, if UT is large then for
any fixed T the random functions I’t,T(x) (0 5 t 5 T - U T )will approximate
some elements of S but not all of them; all of them will be approximated
when T is also allowed to vary.

8.3 The rate of convergence in Strassen’s


theorems
Strassen’s theorems 1 and 2 imply: for any E > 0 and for almost all w E R
w ) > 0 such that
there exists an integer no = no(&,

if n 2 no, equivalently there exists a sequence cn = E~ \ 0 such that

Sn(x, W ) E U ( S ,E n ) and w ~ ( W
x ),E U ( S ,E,) U.S.

for all but finitely many n.


It is natural to ask how can we characterize the possible E~ sequences in
the above statement. This question was proposed and firstly investigated
by Bolthausen (1978). A better result was given by Grill (1987/A), who
proved

THEOREM 8.5 Let

Then

Clearly Theorem 8.5 implies Property 1 of Section 8.1 but it does not
contain Property 2 of Section 8.1. As far as Property 2 of Section 8.1 is
concerned one can ask the following question.
STRASSEN T Y P E THEOREMS 93

Let f (z) be an arbitrary element of S. We know that for all E > 0 and
for almost all w E R there exists an integer n = n(E,W) resp. n = A ( E , w )
such that

Replacing E by E , in the above inequalities, they remain true if E, .1 0


slowly enough. We ask how such an E, can be chosen. This question was
raised and studied by Csaki. He proved
THEOREM 8.6 (Csaki, 1980). For any f (z) E S and 0 < c < 1 we have
sup IW,(Z) - f(x)l < c(loglogn)-l/’ i.0.a s .
o<xg

and 7r
sup (w,(x) - f(x)l 2 z(l - c)(logIogn)-l a.s.
O<x<l

for all but finitely many n .


If /t(f’(x))2dx = a < 1 then a stronger result can be obtained:
THEOREM 8.7 (Cs&ki, 1980, de Acosta, 1983). If f ( x ) E S,0 < c <1
and !l(f‘(z))’dx = a < 1 then

and
7r(l- c)
SUP
O<Z<l
Iw,(z) - f )I(. > 4(1- a)1/2 loglogn U.S.

for all but finitely many n.


In case J,’(f’(x))’dx = 1the best possible rate is available only for piecewise
linear and quadratic functions. Let f (x) be a continuous piecewise linear
function with f(0) = 0 and

f’(z) =pi if ai-1 < z < ui (i = l , 2 , . . .,k)


where a0 =0 < a1 < . . . < ukP1 < Uk = 1. Then we have
THEOREM 8.8 (Cs&i, 1980). If f(x) E S and Jt(f‘(x))’dx = 1 then
for any E > 0
SUP [Wn(z) - f ( ~ ) l < ~ ~ / ~ 2 - ~ / +~ ~~) ( -l o~g l/o ~g n() -i ~ /i~.0.US.
o<xg
94 CHAPTER 8

and
SUP IWn(z) - f(.)l > 7r2 / 3 2 - 5 / 3 ~ - 1 / 3(1 - E ) (log log n )-2/3 a.s.
05x51
for all but finitely m a n y n where
B = IP2 - P l l + ... + IPk - PlC--ll+ IPkl.

THEOREM 8.9 (CsBki, 1990). Let


a 2
f ( z ) = - z +bz (.>_(I)
2
and
rl
J, (f'(z))2dz= 1.
Then

where po i s the smallest eigenvalue of the differential equation


1
-y"
2
+
p ( a z = la bl)y = 0 +
with boundary condition y(-1) = y(1) = 0.
The rate of convergence for a larger class of functions f (.) is given by Gorn
and Lifschitz (1998).
Remark 1. Theorems 8.5 (resp. 8.8) imply that for any < 2/3
~ ( t5 )(t (2 log log t + (log log t ) l - P ) ) 'I2 as., (8.15)
if t is big enough resp.
W ( t )2 ( t ( 2 l o g l o g t - (1 + &)(loglogt)1/3,2/321/3B-1/3
i.0. a s .
(8.16)
) ) ll2
(8.15) and (8.16) clearly imply the Khinchine's LIL but they are much
weaker than EFKP LIL of Section 5.2 (cf. Consequence 1 of Section 5.2).
Remark 2. Applying Theorem 8.8 for f ( z ) = 0 (0 5 z 5 1) we obtain
(8.16) as a special case.

Remark 3. It looks an interesting question t o find a common generaliza-


tion of the results of Section 8.2 and those of Section 8.3, i.e. t o investigate
the rate of convergence in Theorems 8.3 and 8.4. This question was studied
by Goodman and Kuelbs (1988).
STRASSEN T Y P E THEOREMS 95

8.4 A theorem of Wichura


We have seen that Strassen’s theorem is a natural generalization of Khin-
chine’s LIL. Wichura proposed to find a similar (Strassen type) generaliza-
tion of the Other LIL (cf. (5.9)). He proved

WICHURA’S THEOREM (Wichura, 1977, Mueller, 1983). Consider


the sequence

Sn(U) = { (T) sup


1’2 x_<u + (. - 1 1)
y )x[nx]+l ;0 5 u5

( n = 1 , 2 , . . .) and the net

Let 4 be the set of nondecreasing, nonnegative functions g on [0,1]satisfying

Then with probability 1, the set of limit points of Sn(u) (resp. wi(u))in the
weak topology, as n 7co (resp. t 00) is G.

Remark 1. In order to see that this theorem implies the Other LIL we
only have to prove that g(l) 2 for any g ( - ) E G.
This page intentionally left blank
Chapter 9

Distribution of the Local Time

9.1 Exact distributions


Let
pl(k) = min{n : S, = k} (k = I, 2 , . . .).
Then
P(p1(Ic) > n } = P{M$ < I c } .
Hence by Theorem 2.4 we obtain
THEOREM 9.1

Especially
P{p1(1) > n } = 2-n
Consequently

(9.1)

and

Note that pl(2Ic + 1) takes only odd, pl(2k) takes only even numbers.
THEOREM 9.2 Let po = 0 and pk = min{j : j > P k - 1 , Sj = 0)
( I c = 1 , 2 , . . .). T h e n p1, p2 - p1 ,p3 - p 2 , . . . is a sequence of i.i.d.r.u's with

(9.2)

and
P{p1 > 2n) = 2-2n );( = P{S,, = O } .

97
98 CHAPTER 9

Proof. The statement that p1,p2 - p1,p3 - p 2 , . . . are i.i.d.r.v.'s taking


only even values is trivial. Hence we prove (9.2) only.

Clearly

P{p1 = 3k I x1 = +l) = P(p1 = 2k I x


1 = -1)
= P(Pl(1) = 2k - 1). (9.3)
Hence by (9.1) we have (9.2). The second statement of Theorem 9.2 is
a simple consequence of (9.2).
Remark 1. A simple calculation gives
00 00
2k - 2
P(p1 < m) = C P ( p , = 2k) = c 2 -
k=l k=l

Hence the particle returns to the origin with probability 1, i.e. we obtained
a new proof of P6lya Recurrence Theorem of Section 3.1. However, observe
that

i.e. the expectation of the waiting time of the recurrence is infinite.


Consider a random walk { S k ; k = 0 , 1 , 2 , . . .) and observe how many
times a given z(z = 0, f l ,f 2 , . . .) was visited during the first n steps, i.e.
observe
[ ( z , n )= # { k : 0 < k 5 n, s k = z}.
The process [(z, n) (of two variables) is the local time of the random walk
{Sk}.

THEOREM 9.3 For any k = 0 , 1 , 2 , . . . , n; n = 1 , 2 , . . . we have

P{[(o, 2n) = k } = ~ ( ( ( 0 , 2 7 2+ 1) = IC}= 2-2n+k (2,; k).

Equivalently
DISTRIBUTION OF T H E LOCAL T I M E 99

Proof. (9.3) implies that the distribution of p1 is identical with that of


+
pi(1) 1. It follows that the distribution of pk - k is identical to that of
P I @ ) , i.e.
P{pk - k > n} = P{pl(k) > n } = P{M$ < k } .
Further, we have

which implies the Theorem.


Applying Theorems 9.1 and 9.3 we can get the distribution of E(x,n).
In fact we have
THEOREM 9.4 Let x > 0, k > 0. Then f o r any k = 1 , 2 , . . .
P{t(x,n) = k )

= c
n-2k

j=,
p{e(x, n ) = k I P1(.> = j>P{Pl (x) = 3.1

= c
n-2k

j=,
P{E(O,n - j ) = k - l}P{pl(X) = j }

= C 2-2[(n-j)/2]-j+k-l
n-2k

j=X

if n + x i s even,
-
if n + x is odd.

F o r k = 0, x >0 by (2.9) we have


P{<(x,n) = 0) = P{Pl(X) > n} = P{M,+ < x}

Theorem 9.3 gave the distribution of the number of zeros in the path
So, 5'1,. . . , We ask also about the distribution of the number of
those zeros where the path changes its sign. Let
O(n) = # { k : 1 5 k < n, S k - l S k + l < 0)
100 CHAPTER 9

be the number of crossings (sign changes). Then as a trivial consequence


of Theorem 9.3 we obtain

THEOREM 9.5

P{0(2n + 1) = k}
3 cn

j= k
P{0(2n + 1) = k I ((0,2n) = j}P{[(O, 2n) = j }
2nk++1 1 )
= ~ ( D s ( 2 n n - ’ ) ~ = ~ ( n +
j=k
= 2P(Szn+1 = 2k + 1).

Proof. It is a trivial consequence of Lemma 3.1.

THEOREM 9.7 For any k = kl,3 ~ 2 ,... and 1 = 0 , 1 , 2 , .. . we have


21kl - 1
P{C(kPl) = O } = I__ (9.4)
2lk1

(9.5)

(9.6)

Proof. Without loss of generality we may assume that k > 0. Then

{[(k,Pl)=O}= {Xl = - 1 } U { X 1 =LSz # k , S 3 #k,...,Spl-l #k}.

Hence by Lemma 3.1


1 lk-1 2k-1
P(E(k,p1) = 0 ) = - + -- =-
2 2 k 2k
and (9.4) is proved.
DISTRlBUTlON OF T H E LOCAL TIME 101

For any z = k l , f 2 , . . . define

Po(%) = 0,
p l ( z ) = inf{l : 1 > 0, Sl = z},
.... ..,.......
p i + l ( z ) = inf(l : 1 > p i ( z ) , Sl = z} (i = 0 , 1 , 2 , . . .).

Then in case m > 0 we have

Hence, again by Lemma 3.1

= (!k)2 2k - 1
(3$-l

and (9.5) is also proved.


(9.6) is a simple consequence of (9.4) and (9.5).
Define the r.v.’s &n as follows: (2n (n = 1 , 2 , . . .) is the number of those
terms of the sequence S1, 5’2,. . . ,S 2 n which are positive or which are equal
to 0 but the preceding term of which is positive. Then (22n takes on only
even numbers and its distribution is described by

THEOREM 9.8

P{(2n=2k}= (J(
2k 2 n - 2 k
n - k )2-2” ( k = 0 , 1 , . . . , n). (9.7)

Proof. ( R h y i , 1970/A). Clearly

{& = O} = { M L = O}.
Hence by Theorem 2.4

i.e. (9.7) holds for k = 0. It is also easy to see that (9.7) holds for n = 1 ,
k = 0 , l . Now we use induction on n. Suppose that (9.7) is true for
n 5 N - 1 and consider P ( ( 2 N = 2 k ) for 1 5 k 5 N - 1. If C2N = 2k
102 CHAPTER 9

and 1 5 k 5 N - 1 then the sequence S1,S2, . . . , s 2 N has to contain both


positive and negative terms, and thus it contains a t least one term equal
to 0. Let p 1 = 21. Then either Sn > 0 for n < 21 and 5'21 = 0 or S, < 0 for
n < 21 and S2l = 0. Both possibilities have the probability
1
,P{p1 = 21) = -
2211 ( 211-- 21)
(cf. (9.2)).
NOWif Sn > 0 for n < 21 and 5'21 = 0, further if <2N = 2 k , then among
the numbers Szl+l, . . . , S 2 N there are 2k - 21 positive ones or zeros preceded
by a positive term, while in case S, < 0 for n < 21, S2l = 0 and <2N = 2 k ,
the number of such terms is 2k. Hence
k
1
P{<2N = 2 k ) = - p{pl = 2l}P{<2N-21 = 2k - 21)
2 1=1
~ N-k

and we obtain (9.7) by an elementary calculation.


It is worthwhile to mention that the distribution of the location of the
last zero up to 2 n , i.e. the distribution of

@(an)= max{k : 0 5 k 5 n, S2k =0)

agrees with the distribution of <Zn. In fact we have


THEOREM 9.9

P(Q(2n)= 2k} = (Y) (2; I:") 2-2n ( k = 0 , 1 , . . . ,n).

Proof. Clearly by Theorem 9.2

P ( Q ( 2 n ) = 2 k } = P { s 2 k = O}P{pl > 2 n - 2 k )
= P{S2k = O}P{S2n-2k = o}.

Hence the Theorem.


The distribution of the location of the maximum also agrees with those
of Cn and Q ( n ) . In fact we have
THEOREM 9.10 Let
p + ( n ) = inf{k : 0 5 k 5 n f o r which S ( k ) = M + ( n ) } .
DISTRIBUTION OF THE LOCAL TIME 103

Then

Proof. Clearly the number of paths for which

is equal to the number of paths for which

Hence

Then we obtain Theorem 9.10 by Theorem 2.4.

9.2 Limit distributions


Applying the above given exact distributions and the Stirling formula we
obtain

THEOREM 9.11

(9.8)

(9.9)

(9.10)
104 CHAPTER 9

= 61" e-U2/2du (x = f l , f 2 , . . .), (9.11)

arcsinfi (0 < x < 1). (9.12)


71-00

Remark 1. (9.12) is called arcsine law. It is worthwhile to mention that


by (9.12) we obtain

and

71-03

The exact distribution (9.7) of C2n also implies that the most improbable
value of 5211 is n and the most probable values are 0 and 2n. In other words,
with a big probability the particle spends a long time on the left-hand side
of the line and only a short time on the right-hand side or conversely but
it is very unlikely that it spends the same (or nearly the same) time on the
positive and on the negative side.

9.3 Definition and distribution of the


local time of a Wiener process
It is easy to see that the number of the time points before any given T ,
where a Wiener process W is equal to a given x, is 0 or 00 a.s., i.e. for any
T > 0 and any real x

#{t : 0 5 t < T , W ( t )= x} = 0 or 00 a.s.


Hence if we want t o characterize the amount of time till T which the Wiener
process spends in x (or nearby) then we have to find a more adequate
definition than the number of time points. P. Levy proposed the following
idea.
DISTRIBUTION OF T H E LOCAL T I M E 105

Let H ( A ,t ) be the occupation time of the Borel set A c EX1 by W ( . )in


the interval (0, t ) ,formally

H ( A , t ) = X{S : s 5 t , W ( S )E A }
where X is the Lebesgue measure.
For any fixed t > 0 and for almost all w E R the occupation time H ( A ,t )
is a measure on the Borel sets of the real line. Trotter (1958) proved
that this measure is absolutely continuous with respect to the Lebesgue
measure and its Radon-Nikodym derivate ~ ( xt ), is continuous in (x,t ) .
The stochastic process ~ ( zt ,) is called the local time of W . (It characterizes
the amount of time that the Wiener process W spends till t 'hear" t o the
point x.) Our first aim is to evaluate the distribution of the r.v. ~ ( 0t ), .
In fact we prove
THEOREM 9.12 For any x >0 and t >0

(9.13)

Proof. For any N = 1 , 2 , . . . define the sequence 0 < 71 = 71'"' < 72 =


TiN) < ... as follows:
71 = inf{t : t > 0, Iw(t)I= N - ' } ,
72 = inf{t : t > 7 1 , IW(t)- W(r1)I = N - ' } ,
...........
ri+1 = inf{t : t > ri, IW(t)- W(ri)l= N - ' } ,
...........
(cf. Skorohod embedding scheme, Section 6.4) and let

Sy)= W ( Q ) ( k = 1 , 2 , . . .),
p y x , n ) = # { k : 0 < k 5 n,SLN)= z},
v = v('") = max{i : ri 5 1).

Note that 7 1 , 72 - 71, ... is a sequence of i.i.d.r.v.'s with


Er1 = N - 2 and Erf < 03 (9.14)

(cf. (6.7)).
The interval (ri,~ i + will ~ ) be called type (a, b) ( a = j N - ' , Ib - a[=
N - ' , j = 0 , 1 , 2 , . . .) if IW(ri)l = a and IW(ri+l)l = b. The infinite random
106 CHAPTER 9

set of those i's for which ( ~ i ~, i + l is


) an interval of type (a,
b) will be denoted
by I ( N ) ( ab), = I ( a ,b). It is clear that IW(t)l can be smaller than N-' if
t is an element of an interval of type ( o , N - ' ) , (N-',O) or (N-',2N-').
Let
A = A ( N )= {i : 0 5 i 5 v, i E I ( 0 , N - ' ) U I(N-',O)}.
Then by the law of large numbers and (9.14)

N2X(7i+l
- Ti)
i€A
+1 a.s. ( N -+m). (9.15)
2 p ) (0, v)
(In fact (9.15) can be obtained using the "Method of high moments" of
Section 4.2, to obtain it by "Gap method" seems to be hard.)
Studying the local time of W ( . )in intervals of type (N-',2N-'), we
obtain
C(V(0 G,+ l ) - V ( 0 , T i ) ) = 0 8.5. (9.16)
i€B
for any N = 1,2,. . . where

B = B N = {i : 0 5 i. 5 v, i E I ( N - l , 2 N - 1 ) } . (9.17)

((9.16) follows from the simple fact that for almost all w E 0 there exists
an EO = E O ( W , N ) such that 1 W ( t )12 EO if t E U i E ~ ( ~~ii+, l ) . )
Hence
(9.18)

Then, taking into account that limN+m N-'vN = 1 a s . , (9.11), (9.15)


and (9.18) combined imply that

P{q(O,l) < z} = ElX e-u2/2du (9.19)

and Theorem 9.12 follows from (9.19) and from the simple transformation:
for any T > 0

{ ~ ( zt,T ) ,z E R1,0 5 t 5 l} 2 {T1/2q(zT-1/2, R' , O 5 t 5 l}.


t ) ,z E
(9.20)
Theorem 9.12 clearly implies that for any z E IW' we have

Levy (1948) also proved


DISTRIBUTION OF T H E LOCAL T I M E 107

THEOREM 9.13 For any x E R1,T > 0 and u > 0 we have


P ( q ( 2 , T )< u} = 2@ - (
u;;:‘) - 1.

To evaluate the distribution of q(t) = sup-,<,<, q(x,t ) is much harder.


This was done by Csaki and Foldes (1986). They proved
THEOREM 9.14

where 0 < jl < j 2 < . . . are the positive zeros (roots) of the Bessel function
Jo(x) = Io(iz) and for any k = 1 , 2 , . . .
4
ak = -
sin2j ,

1
JI(Z) = T I l ( i Z ) .
a
Furthermore

Remark 1. The proof of Theorem 9.14 is based on a result of Borodin


(1982)’ who evaluated the Laplace transform of the distribution of t-l/’q(t).
As we mentioned above the occupation time H ( A ,t ) is absolutely con-
tinuous with respect to the Lebesgue measure. Consequently

as. (9.21)

Hence (9.21) can be considered as a possible definition of the local time.


A number of results are known on the rate of convergence in (9.21). The
strongest one is
THEOREM 9.15 (Khoshnevisan, 1994). For all t > 0 with probability 1,

where q(t ) = supZGHp1


q(x,t).
108 CHAPTER 9

The local version of the above theorem is


THEOREM 9.16 (Khoshnevisan, 1994). For every t >0 and z E R1 we
have
lim sup
&-+O (& log log &-y
Chapter 10

Local Time and Invariance Principle

10.1 An invariance principle


The main result of this Section claims that the local time E(z,n) of a
random walk can be approximated by the local time ~ ( x , nof) a Wiener
process uniformly in x as n + co. In fact we have
THEOREM 10.1 (R&v&sz,1981). L e t { W ( t )t, 2 0) be a W i e n e r process
defined o n a probability space {R, F,P}. T h e n o n t h e s a m e probability space
(0,F,P} o n e can define a sequence X I ,X 2 , . . . of i.i.d.r.v. ’s with
P { X i = 1) = P { X i = -1) = 1/2 such that
, =o as.
lim n-1/4-Esup ~“z, n ) - ~ ( zn)l (10.1)
n+co X

for a n y E > 0 where t h e sup is t a k e n over all integers, 7 i s t h e local time of


W and E is t h e local t i m e of S, = X I + X 2 +...+ X,.
For the sake of simplicity, instead of (10.1) we prove only

lim “(0,
n--1/4--~ , = 0 a.s.
n ) - ~ ( 0n)I (10.2)
n+ 00

for any E > 0. The proof of (10.1) does not require any new idea. Only a
more tiresome calculation is needed.

Proof of (10.2). Define the r.v.’s 70 = 0 < 7 1 < 7 2 < . . . just like in
Section 6.3. Further let 1 < p1 < p 2 < . . . be the time-points where the
) } 0, i.e. let
random walk { S k } = { W ( T ~visits

p1 = min{k : k > 0, W(7-k)= Sk = 0},


p z = min{k : k > PI, W ( T ~=)sk = 0},
............
p, = min{k : k > pn-1, W ( Q )= Sk = 0},
............
Then
((0, n) = max{k : pk 5 n }

109
110 CHAPTER 10

and

v ( 0 , n )= c
E(O,k)

j=1
(V(O,TPj+l)- V ( 0 , T P j ) ) .

The proof of Theorem 9.12 implies

E(V(0, TPj+l) - d o , T P j ) ) = 1.
Hence

~ ( 0Q)
, = J(0, k ) + o ( ( J ( 0 , k))'I2+') as. (Ic + co).
(6.1) easily implies that

7-k = k -t 0 (k1/2+') a.S.

Then (10.2) easily follows from

J(0, k ) = o (k'/2+E) a.s. (Ic + co) (10.3)

and

sup ( q ( 0 , j + k 1 / 2 t E ) - q ( 0 , j ) ) = o (k1I4+') a.s. ( k + co). (10.4)


j5k

(10.3) and (10.4) can be easily proved. Their proofs are omitted here be-
cause more general results will be given in Chapter ll (Theorems 11.1 and
11.7).

Remark 1. Borodin (1986/A,B) proved that nEin (10.1) can be replaced


by logn, more than that M. Csorgo-Horvath (1989) proved that n' in (10.1)
can be replaced by (logn)1/2(log log n)lI4+€.

Remark 2. It turns out that the rate of convergence in Theorem 10.1 is


nearly the best possible. In fact Remark 3 of Section 11.5 implies that if
a Wiener process W ( . )and a random walk { S n } are defined on the same
probability space then

lim sup n-'/4 sup l<(x,n ) - q(x,n)l > o a s . (10.5)


n+cc 2

However, the answer to the following question is unknown. Assume that


a Wiener process and a random walk are defined on the same probability
space and
lim n-*IE(O,n)- ~ ( 0 , n )=l 0 a.s.
n+cc
L O C A L T I M E A N D I N V A R I A N C E PRINCIPLE 111

What can be said about a?

Remark 3. It can also be proved that in Theorem 10.1 the random walk
Sn and the Wiener process W ( t )can be constructed so that besides (10.1)

IS, - W(n)I = O(1ogn) a s .

Remark 4. A trivial consequence of Theorem 10.1 is

(10.6)

for any z > 0 where <(n)= max, <(x,n ) (cf. Theorem 9.14).

10.2 A theorem of Levy


Theorem 10.1 tells us that the properties of the process <(x,n ) are the same
(or more or less the same) as those of ~ ( xn ,) . In other words, studying the
behaviour of one of the processes [(x,n ) , ~ ( xn), we can automatically claim
that the behaviour of the other process is the same. The main results of
the present Section tell us that the properties of [(O, n ) (resp. ~ ( 0n,) ) are
the same as those of M + ( n ) (resp. m+(n)).Hence the theorems proved for
M + ( n ) (resp. m+(n))will be inherited by t(0, n ) (resp. ~ ( 0n,) ) .
Let
y ( t ) = m+(t)- W ( t )(t 2 0 )
and
~ ( n=)M + ( n ) - S(n) (n = 0 , 1 , 2 , . . .).
Then a celebrated result of P. L6vy reads as follows (see for example, Knight
(1981), Theorem 5.3.7):

THEOREM 10.2 We have

{ y ( t ) , m + ( t ) ;t 2 O ) ~ { l W ( t ) l , Q ( 0 , t )2; 01,

in other words, the finite dimensional distributions of the vector valued


process {y(t),m+(t); t 2 0 ) are equal to the corresponding distributions of
{IW(t)l,d o , t ) ; t 2 0).
In order to see the importance of this theorem, we mention that applying
the LIL of Khinchine (Section 4.4, see also Theorem 6.2) for m+(t)as a
trivial consequence of Theorem 10.2 we obtain
112 CHAPTER 10

Consequence 1.

(10.7)

In fact the Levy classes can also be obtained for ~ ( 0t ), .


Applying Theorem 10.1, Consequence 1 in turn implies

Consequence 2.

(10.8)

Remark 1. (10.7) was proved (directly) by Kesten (1965). (10.8) is due


t o Chung and Hunt (1949).
A natural question arises: what is the analogue of Theorem 10.2 in
the case of a random walk? In fact we ask: does Theorem 10.2 remain
true if we replace W ( t )y, ( t ) ,rn+(t), ~ ( 0t ), by S ( n ) ,Y (n), M + ( n ) ,t ( 0 ,n )
respectively? The answer to this question is negative, which can be seen
by comparing the distributions of [(O, 2n) and M+(2n) (cf. Theorems 2.4
and 9.3).
In spite of this disappointing fact we prove that Theorem 10.2 is “nearly
true” for random walks. In fact we have

THEOREM 10.3 (Csaki-Revksz, 1983). L e t XI,X2,. . . be a sequence of


i.i.d.r.u.’s w i t h P(X1 = 1) = P(X1 = -1) = 1 / 2 defined o n a probability
space {CI,F,P}. T h e n o n e c a n define a sequence X l , X 2 , : . . of i.i.d.r.u.’s
on t h e s a m e probability space s u c h t h a t P(X1 = 1) = P(X1 = -1) = 1/2
and for a n y E > 0
n-Ep-(n) - IS(n)lI + 0 U . S .
and
n-1/4-E I&+(n)- ‘(0,n)I -+ 0 U.S.,

where

Y ( n ) = A2+(n)- S ( n ) .

Remark 2. This theorem is a bit stronger than that of Cstiki-Rkvksz


(1983). The proof is presented below.
LOCAL T I M E A N D 1NVARIANCE PRINCIPLE 113

Remark 3. Consequence 2 can also be obtained by applying the LIL of


Khinchine (cf. Section 4.4) for M,' and Theorem 10.3.
Theorem 10.3 tells us that the vector (IS(n)l,<(O,n)) can be approxi-
mated by the vector ( p ( n )h;r+(n))
, in order n1I4+€.Unfortunately we do
not know what the best possible rate here is. However, we can show that
by considering the number of crossings O ( n ) instead of the number of roots
t(0, n ) , better rates can be achieved than that of Theorem 10.3. Let

O ( n )= # { k : 1 5 k 5 12, S ( k - l)S(lC+ 1) < 0)


be the number of crossings. Then we have
THEOREM 10.4 (CsBki-Rkvesz, 1983 and Simons, 1983). Let XI X Z ,. . .
be a sequence of i.i.d.r.v.'s with P(X1 = 1) = P(X1 = -1) = 1/2 de-
.
fined o n a probability space {O,.F,P}. T h e n one can define a sequence
X I , X Z , . . . of i.i.d.r.v.'s o n the same probability space ( 0 ,F ,P} such that
P{X~= I} = P { X ~= -1) = 1/2 and

1A2+(n)- 2@(n)l5 1, (10.9)


I%4 - IS(n)lI 5 2 1 (10.10)

f o r any n = 1 , 2 , . . . where

Y ( n ) = M + ( n ) - S(n).
Proof. Let

71 = min{i : i > 0, S ( i - 1)S(i 1) < 0},+


7-2 = min{i : i > q , S(i - 1)S(i 1) < 0}, +
...........
q+1 = min(i : i > 71, S ( i - 1)S(i+ I) < 0}, .
and
1Xj+1
-x if 1< j 5 71,
XlX,+l if 71 + 15 j 5 72,
xj= ............
(-1)"'X1xj+l if 71 + 15 j _< r1+1
............
This transformation was given by Cs&ki and Vincze (1961). The following
lemma is clearly true.
114 CHAPTER 10

LEMMA 10.1
(i) X l , X z , . . . is a sequence of i.i.d.r.v. with
P{X, = +1} = P{X, = -1} = 1/2.
(ii)

S ( k ) - 5(71)= ck

j=T1+1
x j = (-1)I+lX1 ck

j=T,+l
Xj+l =

51 if Tl+l<k<Tl+1-2,
(-l)’+’X1(s(k + 1) - s(Ti + 1))
{ =1
=2
if
if
k = 71+1 - 1,
k = Tl+1.

(iii) 2O(71) = 21 = S ( q ) = M + ( T ~ )1 ,= 1 , 2 , . . ..
(iv) For any <
n < ~ 1 + 1 we have O ( n ) = 1, 21 <_ af(n)5 21 + 1,
consequently o 5 A?+(.) - 20(n) 5 1.

(v)
S(k)=
{ 2121 ++ 21 -
-
IS(k + 1)1
IS(k)l
if
if
71 +
1 5 Ic
k=Ti+i,
5 q+1 - 1,

therefore
Y ( k )= rii+(k) - S ( k ) 5 IS(k + 1)1 5 IS(k)l + 1
and
Y ( k )= M + ( k ) - S ( k ) _> IS(k + 1)1 - 1 2 ]S(k)l- 2.
This proves Theorem 10.4.
P r o o f of Theorem 10.3. Clearly we have
n-1/4-E “(0, n ) - 2O(n)I + 0 a s . (10.11)
Hence we obtain Theorem 10.3 as a trivial consequence of Theorem 10.4.
Applying the Invariance Principle 1 (cf. Section 6.3), Theorems 10.2
and 10.4 as well as (10.11) we easily obtain

Consequence 3. (CsAki-Revksz, 1983). On a rich enough probability


space { R , F , P } one can define a Wiener process { W ( t ) ;t 2 0) and a
sequence Xl,XZ,. .. of i.i.d.r.v.’s with P(X1 = l} = P{X1 = -1) = 1/2
such that
IS(n) - W(n)I= O(1ogn) a s . , (10.12)
120(n) - ~ ( 0 , n )=J O(1ogn) a s . (10.13)
LOCAL T I M E A N D INVARIANCE PRINCIPLE 115

and for any E >0


l((0, n) - q(0,n ) ]= 0(721/4+E). (10.14)

Remark 4. Hence we obtain a new proof of Theorem 10.1 when only a


fixed x is considered.

Remark 5. Having Theorem 10.3, Theorem 10.2 can be easily deduced


(cf. Csski-Rkvitsz 1983, Simons 1983).

Question. Is it possible t o define two random walks {S?)} and { S g ) }on


a probability space such that
- 20(~)(n)1+ o a.s.
n-"l~$l)(o,n)
for some 0 < cy 5 1/4 (cf. (10.14)) where (("(0,n)is the local time of 5':)
and d2) is the number of crossings of S?'? If a positive answer can be
obtained, then in (10.11) a better rate can also be obtained. However, if
the answer is negative, then (10.11) also gives the best possible rate (except
that perhaps n" can be replaced by some log n power). Hence this question
is equivalent to the question of Remark 1 of Section 10.1.
Now we formulate another trivial consequence of Theorem 10.2.

Consequence 4. Let
F ( T )= max (W(u)- W ( v ) )
OLu<v<T

be the maximal fall of W ( . )in [O,T].Then


+
(1 E ) ( b ( T ) ) - l E UUC(F(T)),
(1 - &)(b(T))-lE ULC(F(T)),

(1+ E ) (c q 2
8 loglogn
E LUC(F(T)),

(1 - E ) (.._8>
loglogn 1/2 E LLC(F(T)),

{
P T-1/2F(T)< x} = G(z) = H ( x )
where G(.) and H ( . ) are defined in Theorem 2.13.

Proof of Consequence 4. Observe that F ( T ) = maxoltlT y(t) and apply


Theorems 10.2, 2.13, the LIL of Khinchine (Section 4.4) and the Other LIL
(5.9).
It is also interesting to study the properties of F ( T ) for those T's for
which W ( T )is very large. In fact we prove
116 CHAPTER 10

THEOREM 10.5 Let C1 and C2 be two positive constants. Then there


exists a sequence 0 < tl = tl(w;C1, C2) < t2 = t2(w;C1, C2) < . . . such
that
F(tn) L ~2 ( tn
log log tn
1/2
)and W(tn>2 Cl(b(tn))-l

7r2
c;+-
2c;
< 1.

The proof of the above theorem is based on the following:


THEOREM 10.6 (Mogul'skii, 1979).

P r o o f of Theorem 10.5. Observe that the conditional distribution of

{ W ( t ) ,O 5 t 5 T } given W ( T )= C ( b ( ~ ) ) - l
is equal t o the distribution of

{BT(t)+ Ct(Tb(T))-l,0 5 t 5 T } .
+
Further, the maximal fall of BT(t) Ct(Tb(T))-' is less than or equal t o
2 maxOct<T
- - J B T ( t ) )Hence
. we obtain

>
- (logT)-(c:+n2(1+')/2c~)

where BT(t) = W ( t )- t W ( T ) (0 5 t 5 T ) and the proof follows by the


usual way (cf. e.g. the proof of the LIL of Khinchine, Section 4.4).
Chapter 11

Strong Theorems of the Local Time

11.1 Strong theorems for c ( x , n ) and [ ( n )


The Recurrence Theorem (cf. Section 3.1) clearly implies that for any
x = 0, f l ,f 2 , . . .
lim t ( z , n )= 0;) a.s. (11.1)
n+oo

In order to get the rate of convergence in (11.1) it is enough to observe that


by Theorem 10.3 the limit behaviour of ( ( 0 , n ) (and consequently that of
[ ( x , n ) )is the same as that of M:. Hence by the EFKP LIL (cf. Section
5.2) and by the Theorem of Hirsch (cf. Section 5.3) we obtain

THEOREM 11.1 T h e nondecreasing f u n c t i o n

f ( n )E uuc (n-I%x, 4)
if and only if

T h e nonincreasing f u n c t i o n

g ( n ) E LLC ( n - l w x , n,)

if a n d only if

where x is a n arbitrary fixed integer.

Having Theorem 10.2 (instead of Theorem 10.3) and Theorem 6.2 or ap-
plying Theorem 11.1 and Theorem 10.1 we obtain

THEOREM 11.2 T h e o r e m 11.1r e m a i n s t r u e if w e replace <(., .) by q(., .).

117
118 CHAPTER 11

Remark 1. Theorem 11.1 was proved originally by Chung and Hunt


(1949). Theorem 11.2 is due to Kesten (1965).
The study of J(n) = max, J(x,n) is much harder than that of J(x,n).
However, having Theorem 9.14 (cf. also (10.6)) one can prove
THEOREM 11.3 (Kesten, 1965, Csaki-Foldes, 1986).
lim sup b(n)<(n)= limsup b(t)q(t)= 1 a s . (11.2)
n+w t+w

lim inf n-l/'(log log n)'I2[(n)= lim inf t-'/'(log log t)l/'q(t)
n+w t-iw
= y = 21/2jl a.s. (11.3)
where jl is the first positive root of the Bessel function Jo(x).

Remark 2. (11.2) is due to Kesten (1965). (11.3) is also due t o Kesten


without obtaining the exact value of y.
The result of CsAki and Foldes (1986) is much stronger than (11.3). In
fact they proved:
THEOREM 11.4 Let u(t) > 0 be a nonincreasing function such that
limt+, u ( t ) = 0, u ( t ) t 1 / 2is nondecreasing and limt+w u ( t ) t 1 / 2= 00. Then

u ( t ) E LLC (t-'/'q(t)) and u(n)E LLC (n-'l2J(n))

Remark 3. The proof of Theorem 11.4 is based on Theorem 9.14.


The upper classes of q ( t ) and those of <(n)were also described by Csaki
(1989). He proved
THEOREM 11.5 Let a ( t ) > 0 (t 2 1) be a nondecreasing function. Then

a(t)E uuc ( t - l / ' q ( t ) ) and a(.) E uuc ( n - 1 / 2 ~ ( n ) )

Since J(0, p n ) = n, i.e. pn is the inverse function of <(O, n), by Theorem


11.1 we can also obtain the Lkvy classes of pn. Here we present only the
simplest consequence.
STRONG THEOREMS OF T H E LOCAL T I M E 119

THEOREM 11.6 For any E > 0 we have


n2(logn)2+EE UUC(~,);
n2(logn)2-' E U L C ( ~ , ) ,

Perkins (1981/B) proposed to study the limit behaviour of

and he proved
THEOREM 11.7 There is a nonincreasing function @ ( a )such that

lim sup ??- ('.a


t+cO
(
(2t loglogt)l/2
210glogt)2) = @ ( a ) a.s.

for all a > 0.


It also looks interesting to study the properties of q ( x , p ; ) where p; =
inf{t : t 2 0 , q(0,t ) 2 r } . An analogue of Theorem 11.7 for stopping time
p; is

THEOREM 11.8 (Foldes, 1989). Let f ( x ) be a nondecreasing function


with limz+cOf(x) = 03. Then

77(X,
limsup inf ~ =1
T+cc IXl<Tf(T) 7-

if and only if

11.2 Increments of q(x,t)


First we give the analogue of Theorem 7.13.
THEOREM 11.9 (CsAki-Csorgo-Foldes-R6v6sz, 1983). Let at ( t 2 0 ) be
a nondecreasing function oft for which
120 CHAPTER 11

(i) 0 < at 5 t ,
(ii) t l a t is nondecreasing.
Then

lim sup 6t
t+m
sup
O ~ s ~ t - a t
( q ( z ,s + a t ) - q ( z ,s ) )
= Iim sup bt(q(z,t ) - q(z,t - a t ) ) = 1 a s .
t+m

If we also have
(iii)
lim (Iog(t/at))(loglogt)-‘ = 00
t+m

then
lim 6t
t--tm
sup
O<s<t-at
( q ( z ,s + a t ) - q ( z ,s)) = 1 as.

for any fixed z E R1 where 6t = a,1’2(log(t/at) + 210glogt)-l/~.


By Theorem 10.2 as a trivial consequence of the above Theorem we
obtain
THEOREM 11.10 Theorem 11.9 remains true replacing ~ ( 2t ),by m+(t).

Remark 1. Clearly

SUP
O<s<t-at
+
( m + ( ~a t ) - m+(s))5 SUP
O<s<t-at
+
( W ( S a t ) - W ( S ) ) . (11.4)

Comparing 6t and yt of Theorem 7.13 we obtain in (11.4) that for a sequence


t = t, fCCI we may have strict inequality whenever (iii) does not hold true.
The investigation of the largest possible increment in t when z is also
varying seems to be also interesting. We obtained
THEOREM 11.11 (Csaki-Csorgo”-Foldes-Rkv&z, 1983). Let at (t 2 0 )
be a nondecreasing function o f t satisfying conditions (i) and (ii) of Theorem
11.9. Then

limsupyt sup sup ( q ( z , s+ a t ) - q ( z ,s)) = 1 a.s.


t+m zEW’O<s<t-at

If we also assume that (iii) of Theorem 11.9 holds then

lim yt sup
t+m
sup
zERlO<s<t-at
( q ( z ,s + a t ) - q ( z ,s ) ) = 1 as.
S T R O N G T H E O R E M S OF T H E LOCAL T I M E 121

To find the analogue of Theorem 7.20 seems to be much more delicate.


At first we ask about the length of the longest zero-free interval. Let
r(t ) = sup{a : for which 30 < s s + u ) - ~(0,
< t - a such that ~(0, s) = 0}

be the length of the longest zero-free interval. Then we have


THEOREM 11.12 (Chung-ErdBs, 1952). Let f(x) be a nondecreasing
function for which f(x) /' 03 and x /f ( x ) 7co. Then

(
t 1 - - E UUC(T(t))
fttJ

Remark 2. Originally this theorem was formulated for random walk in-
stead of Wiener process.
Example 1. Since L ( f ) < 00 if f(x) = (logx)2fE(&> 0) and L ( f ) = 00 if
f(z)= (logx)2,we obtain

(1 - (logt)2+'
) E UUC(r(t))

and

or equivalently
liminf inf s + ut) - ~ ( 0s))
(~(0, , = liminf(q(0,t) - ~ ( 0
t-, at)) > 0
t+m O+-<t-at t+m

and
liminf
t+m
inf
Ojslt-at
+ , = lim
( ~ ( 0s, at) - ~ ( 0s)) inf(q(0, t ) - ~ ( 0t -
t+m
, ut)) = 0

This example shows that the study of the properties of the above lim inf (i.e.
the analogue of Theorem 7.20) is interesting only if at > t ( l - (logt)-*).
This question was studied by CsAki and Foldes (1986). They proved
122 CHAPTER 11

THEOREM 11.13 Let f l ( t ) = t(t - at)-' be a nondecreasing function


for which t / f l ( t ) /' 00 and limt+, f l ( t ) = 00. Further, let f 2 ( t ) be a non-
increasing function for which lirnt+- f 2 ( t ) = 0, t1/2 f Z ( t ) is nondecreasing
and limt+m tl/' f 2 ( t ) = 00. Then

if

Example 2. Let at = t ( l - (logt)-2-E)(E > 0). Then f l ( t ) = (logt)'+'


and L ( f 1 )< 00. Since

we obtain

Remark 3. By Theorems 10.1, 10.2 and 10.3, we find that the statement
of Example 2 remains true replacing ~ ( 0t ),by rn+(t) or M + ( n ) . (Compare
this result with the Theorem of Hirsch of Section 5.3.)
Finally we mention the following analogue of Theorem 11.11 (cf. also
Theorem 11.3).
THEOREM 11.14 (Cs&ki-Foldes, 1986). Let a t ( t 2 0 ) be a nondecreas-
ing function o f t satisfying conditions (i) and (ii) of Theorem 11.9. Then
liminf 19tQ(t)= 1 a s .
t+m

where
inf
Q ( t ) = O<s<t--at SUP
zE*l
( V ( 2 ,s + a t ) - V ( 2 ,s))
S T R O N G T H E O R E M S OF T H E LOCAL T I M E 123

and
+
19' = (
log(t/at) log log t
2j:at
If we also assume that (iii) of Theorem 11.9 holds then
Y2
lirn &&(t)= 1 a s .
t+cc

Remark 4. In case at = t we obtain (11.3) as a special case of Theorem


11.14.

Remark 5. The study of the increments of q(z,t ) in II: or in both variables


looks a challenging question.

11.3 Increments of ~ ( I C , n)
In Section 10.1 we have seen that the strong theorems valid for q ( z , t )
(resp. q ( t ) ) remain valid for ( ( z , n ) (resp. ( ( n ) )due to the Invariance
Principle (Theorem 10.1). In Section 7.3 we have seen that the strong
theorems proved for the increments of a Wiener process remain valid for
those of a random walk if a, >> logn (resp. a, >> ( l ~ g n ) depending~) on
what kind of theorems we are talking about. This latter fact is due to the
Invariance Principle 1 (Section 6.3) and especially the rate O(1ogn) in it.
Since the rate in Theorem 10.1 is much worse (it is 0 ( 1 2 ' / ~ + ~ )only) we can
only claim (as a consequence of the Invariance Principle) that the results of
Section 11.2 remain valid for ((z, n) (instead of ~ ( zt ,) ) if a, 2 n 1 / 2 + EThe
.
case a, < n1/2+Erequires a separate study. This was done by CsAki and
Foldes (1984/C). They proved that Theorem 11.9 remain valid for ((z, n)
if a , >> logn. In fact they proved the following two theorems:

THEOREM 11.15 Let 0 < a, 5 n ( n = 1 , 2 , . . .) be an integer valued


nondecreasing sequence. Assume that a n / n is nonincreasing and

lim -an = X I .
n-m logn

Then
lim sup 6,
n+w
sup
Oskln-a,,
(((z, k + a,) - ( ( 5 ,k)) = 1 a.s.

If we also have
124 CHAPTER 11

then
lim S,
n+m
sup
Olksn-a,
( [ ( x ,k + a,) - [(x,k)) = 1 a s .

for any fixed x E Z1where

6, = an1/2 (log(nan1)+ 21og1ogn)-l'~.


THEOREM 11.16 Let c > 0. Then for any fixed x E Z1

lim max [ ( x ,k + [clog"]) - t ( X , k ) = a(c) U.S.


n+m O < k< n- [ clogn] [c 1%I .

where a ( c ) = 1 / 2 if c 5 (log2)-' and it is the only solution of the equation


1
- =(1-2a)log(l-2a)-2(1-a)log(l-a)
C

if c > (log2)-1.

Remark 1. The above theorem suggests the conjecture:

max
Ojkln-a,,
([(O, k + a,) - [(O, k)) =

for all but finitely many n provided that a, = o(1ogn).


Since the Invariance Principle 1 of Section 6.3 is valid with the rate
O(1og n ) , Theorem 11.10 implies
THEOREM 11.17 Theorem 11.15 remains true replacing [(s,n) by M+(n).
The analogue of Theorem 11.11 for [ ( x , n ) is unknown except if a, 2
n1/2+E.

The analogues of Theorems 11.12 and 11.13 can be obtained by the


Invariance Principle for [(x,n ) .

11.4 Strassen type theorems


Let
u t ( x ) = btq(0,X t ) (0 5 x 5 1,t > 0)
and
S T R O N G T H E O R E M S OF T H E LOCAL T I M E 125

( I ; = 0 , 1 , 2 , , . . ,n - 1; n = 1 , 2 , ,. .), We intend to characterize the limit


points of the sequence U,(z) and those of ut(x). Since U n ( z ) (0 5 x _< 1)
for any fixed n is a nondecreasing function, its limit points must also be
nondecreasing .
Definition. Let SM C S be the set of nondecreasing elements of S (cf.
Notations to the Strassen type theorems).
Then we formulate
THEOREM 11.18 (Csaki-Revksz, 1983). T h e sequence { U n ( z ) ; 0 _<
x 5 1) and the n e t { u t ( z ) ;0 5 5 5 l} are relatively compact in G ( 0 , l )
with probability 1 and the sets of their limit points are S M .

Proof. This result is a trivial consequence of Theorems 8.2 and 10.2.


Define the process p ( z n ) (0 _< IC 5 1; n = 1 , 2 , . . .) by p(xn) = PIC.if
x = Ic/n (k = 0 , 1 , 2 , .. . ,n ) and linear between k / n and (k + l ) / n . Then
taking into account that pn is the inverse of ((0, n ) , i.e. ((0, p n ) = n, we
obtain the following consequence of Theorem 11.18:
THEOREM 11.19 T h e set of limit points of the functions
{ 2 n - 2 ( l o g ~ o g n ) p ( z n ) o; 5 x 5 I } (n + m)
consists of those and only those functions f (x)f o r which f - ’ ( x ) E SM.

It is also interesting to characterize the sets of limit points of the se-


quences c(z,n) (resp. q(x,t)) when we consider them as functions of n
(resp. t ) and we choose a big but not too big 2. In fact the Other LIL (cf.
Section 5.3) tells us that
C(x,,n) = 0 resp. v ( z t , t )= 0 i.0. a.s.
if

Hence we consider the case when x is smaller than the above limits, i.e.
when ((., .) and v(., -) are strictly positive a.s. Now we formulate
THEOREM OF DONSKER AND VARADHAN (1977). In the
topology of C(-m, +m) the set of limit points of the functions
126 CHAPTER 11

resp.

consists of those and only those subprobability density functions f ( x ) f o r


which

Remark 1. Mueller (1983) gave a common generalization of the Theorem


of Donsker-Varadhan and that of Wichura (cf. Section 8.4).

11.5 Stability
Intuitively it is clear that <(x,n ) is close to <(y, n) if x is close t o y. This
Section is devoted to studying this problem.
THEOREM 11.20 (Csorgii-Rkvksz, 1985/A).

where k = f l , f 2 , . . ..
THEOREM 11.21 (CsorgB-Rkvksz, 1985/A).

Remark 1. Since for any x E Z1,

<(x,n) = <(o,n) i.0. a.s.


STRONG THEOREMS OF T H E LOCAL T I M E 127

the study of the liminf of I<(z,n)- J(O,n)(is not interesting. The limsup
properties of t(z, n) - [ ( O , n) follow trivially from Theorem 11.10.
Theorem 11.20 stated that <(z,n) is close to ((0,n) for any fixed z if n
is big enough. The next two Theorems claim that in a weaker sense ((z, n )
is nearly equal t o E(0, n ) in a long interval around 0.

THEOREM 11.22 (CsBki-Foldes, 1987). Put

Then
11.5)

and

THEOREM 11.23 (CsBki-Foldes, 1987). Put

Then
5
lim sup
n-+ooh 1 ( n ) < z < h z ( n )
jp- 11 = o
,301
as. if p > -2

and

where c is any positive constant.

Remark 2. The Theorem of Hirsch says that [(x,n)= 0 i.0. as. if


z 2 n1/2(logn)-1. Hence it is clear that (11.5) can be true only if g ( n ) >
n1/2(logn)-1. Theorem 11.22 tells us that g ( n ) must be smaller than this
trivial upper estimate. The behaviour of [ ( M + ( n )- j , n ) for small j will be
described in Theorem 12.27. It implies that <(M+(n)- j,n ) is much, much
smaller than <(O, n). Theorem 11.23 gives the longest interval, depending
on M + ( n ) and M - ( n ) , where [(z, n) is stable.
In order to prove Theorem 11.20 we present a few lemmas.
128 CHAPTER 11

LEMMA 11.1 Let

Then
Ea1 = 0 , Ea; = 4k - 2, (11.6)

(11.7)

(11.8)

and

(11.9)

Proof. (11.6) is a trivial consequence of Theorem 9.7. (11.7), (11.8) and


(11.9) follow from Theorems 2.9, 2.12 and the LIL of Khinchine of Section
4.4 respectively.
The following two lemmas are simple consequences of (11.9).
LEMMA 11.2 Let { p n } be a n y sequence of positive integer valued r.v.'s
with limn+03 pn = co a.s. Then
a l ( k )+ a 2 ( k ) + . . . +
lim sup (k) 2(2k - 1 ) 1 / 2 a.s.
71-03 (pn loglogpn)1/2
LEMMA 11.3 Let { u n } be a sequence of positive integer valued r.v.'s with
the following properties:

(i) limn+03 v, = oo a.s.


(ii) there exists a set Ro C R such that P{Ro} = 0 and for each w $! 00
and k = 1 , 2 , . . . there exists an n = n ( w , k ) for which v , ( , , k ) = k .
Then
S T R O N G T H E O R E M S OF T H E LOCAL T I M E 129

Utilizing Lemma 11.3 with vn = t(0,n) and the trivial inequality a1(k) +
a ~ ( k ) +. . + a ~ ( o , n ) ( kI) t(k,n)-t(O,n) I al(k)+az(k)+...+a<(o,n)+l(k)+
1, we obtain Theorem 11.20.
As far as the proof of Theorem 11.21 is concerned we only present a
proof of the statement

The other statements of Theorem 11.21 are proved along similar lines.
The proof of Theorem 11.21 is based on the following result of Dobrushin
(1955).

THEOREM 11.24

Remark 3. One can also prove that (cf. Theorem 14.1 and (14.17))

This fact together with Theorem 11.24 implies that the rate in the uniform
Invariance Principle (Theorem 10.1) cannot be true with rate n1l4.
Dobrushin also notes that if N1 and N2 are independent normal ( 0 , l )
r.v.'s then the density function g of JNlI1I2N2
is

g(y) = f L m e x p (-$ %)dz.


-

Hence Theorem 11.24 can be reformulated by saying that

11.10)
In fact this statement is not very surprising since on replacing n by
t ( 0 , n )and k by 1 in (11.7), intuitively it is clear that

al(1)+ a 2 ( 1 ) + . . + at(o,n)(l) t ( l , n ) - t(O,n) 3 N 2


N
co).
Jrn Jrn -$

(11.11)
To find an exact proof of (11.11) is not simple at all. We will study this
question in Chapter 14.
130 C H A P TE R 11

Also, by Theorem 9.12


.-‘14(5(0, n))1/23 /N111/2 ( n -+ co). (1 1.12)

Intuitively it is again clear (for an exact formulation see Chapter 14) that

[(l,n, - [(O, n, and ‘(” n, are asymptotically independent.


dW3 fi
(11.13)
Hence (11.11), (11.12) and (11.13) together imply (11.10). The proof of
Dobrushin is not based on this idea. Following his method, however, a
slightly stronger version of his Theorem 11.24 can be obtained.
THEOREM 11.25 Let {x,} be any sequence of positive numbers such
that x, = o(1ogn). Then

and

The following lemma describes some properties of the density function g(y).
Its proof requires only standard analytic methods, the details will be omit-
ted.
LEMMA 11.4
(i) There exists a positive constant C such that for a n y y E R’
3
(11.14)

[ia) For any E > 0 there exists a C = C ( E )> 0 such that

(iii) Let {a,} be a sequence of positive numbers with a, t co. Then for
any E > 0 there exist a C1 = C ~ ( E>) 0 and a C2 = C ~ ( E>) 0 such
that
STRONG THEOREMS OF T H E LOCAL TIME 131

By Theorem 11.25 and (iii) of Lemma 11.4 we have


L E M M A 11.5 For a n y E > 0 there exist a CI = GI(€) >0 and a C2 =
C ~ ( E>)0 such that

and

Now we prove
L E M M A 11.6

Proof. Let

By Lemma 11.5
(11.15)
Let j < k and consider

03 00
132 C H A P T E R 11

where

and

Now a simple but tedious calculation shows that for any E > 0 there exists
a j , such that if j , < j < k , then

P{AjAk} 5 (1+ & ) P { A j ) P { A k ) . (11.16)

Here we omit the details of the proof of this fact, and sketch only the main
( j = 1,2,. . . , k - l),
idea behind it. Since ( n j / n k ) ' l 4 5 K1I4 the lower
limit of integration B(y) above is nearly equal to

Hence for latter y values the integral JZy) g(z)dz is nearly equal to P{Ak}.
Similarly, the integral J r g ( y ) d y gives P { A j } , and (11.16) follows, for in
the case of y > k1I4 the value of g(y) is very small.
STRONG THEOREMS OF T H E LOCAL TIME 133

Now ( 1 1.15), ( 1 1.16) and the Borel-Cantelli lemma combined give Lemma
11.6.
We also have
L E M M A 11.7 Let
mk = [exp(k/ log2 IC)]
and
Bk = (((0, (mk?k+l)) 2 ak+l)
where

T h e n of the events Bk only finitely m a n y occur with probability 1.

Proof. This lemma is an immediate consequence of Theorem 11.15.


L E M M A 11.8 Let

Mk+1 = ((2 f &)mk+l1% log mk+l)l/’


and

Dk = { SUP
L<Mk+l-ak+l
SUP
j<ak+l
IQi(1) f W + i ( l )f...fQi+j(l)l

T h e n of the events Dk only finitely m a n y occur with probability 1.

Proof. Cf. Theorem 7.13.


A simple consequence of Lemmas 11.7, 11.8 and Theorem 11.25 is
L E M M A 11.9 L e t Ek =

T h e n of the events Ek only finitely m a n y occur with probability 1.


LEMMA 11.10

(11.17)
134 CHAPTER 11

Proof. Let

Then by Lemma 11.5 only finitely many of the events Fk occur with prob-
ability 1. Now observing that

we have (11.17) by Lemma 11.9 and Lemma 11.10 is proved.


Also Lemmas 11.6 and 11.10 combined give Theorem 11.21.
The Theorems of the present Section (especially Theorem 11.20) suggest
that ][(i +
1,n) - [(i, n)l is about 2 ( [ ( i ,n ) ) l l 2for any i = 0, f l , . . . if n
is large enough. Consequently somebody is interested in the quadratic
variation
B
A = A ( n ; A , B )= C(E(i+ 1 , n ) - ( ( i , r ~ ) ) ~( A < B )
i=A

B
of the local time might think that A is about 2 C [ ( i , n ) . It is really so:
i=A

THEOREM 11.26 (Foldes-Rkv&z, 1993). For any E > 0 we have


B
lim N-3/4(log N)-'l2-" sup sup A(n;O,B)- 2 C < ( i , n ) = 0 a s .
N+CC l < n i N B>O i=O

Remark 4. In case B < N1l4 this Theorem is only a triviality.


Remark 5. Since

c [ ( i , n )= 71, (n = 1 , 2 , . . .)

as a consequence of the above Theorem we obtain


+m
C ([(i + 1,n) - [ ( i , n ) ) 2= 2n + ~ ( n ~ / ~ ( l o g n ) ~a.s./ ~ + ~ )
a=--03
Chapter 12

Excursions

12.1 On the distribution of the zeros of a


random walk
(9.11) and Theorem 11.1 are telling us in different forms that E(0,n) con-
verges to 0;) like nl/’, i.e. the particle during its first n steps visits the
origin practically n1/2 times. Clearly these n1l2 visits are distributed in
[0, n] in a very nonuniform way.
We have already met the Chung-Erd6s theorem (Theorem 11.12) and
the arcsine law (9.12) claiming that the zeros of {Sk)are very nonuniformly
distributed at least for some n. Now we give a few reformulations of the
Chung-Erd6s theorem in order t o see how it describes the nonuniformness
of the distribution of the zeros of {sk}.First a few notations:
(i) let

R(n)= max{k : k > 1 for which there exists a 0 < j <n-k


such that E ( 0 , j + k ) - ( ( 0 , j ) = 0)

be the length of the longest zero-free interval (longest excursion),

(ii) let

R(n) = max{k : k > 1 for which there exists a 0 < j <n-k


+
such that M f ( j k) = M f ( j ) )

be the length of the longest flat interval of Mk+ up to n,

(iii) let
q(n) = max{k : 1 < k 5 n, Sk = 0)
be the location of the last zero up to n,

<,
(iv) let be the number of those terms of S1, SZ,. . . , S, which are positive
or which are equal t o 0 but the preceding term of which is positive,

135
136 CHAPTER 12

(v) let
) inf{k : 0
p u + ( n= 5 k 5 n for which sk = M;}.

Now we can reformulate the Chung-ErdBs theorem (Theorem 11.12) as


follows:

THEOREM 12.1 Let f(x) be a nondecreasingf2lnctionfor which lim f(x)


X+CU
= co, x/f (z) i s nondecreasing and lim x / f ( x ) = co. Then
X+W

if and only if

where Y ( n ) is a n y of the processes R ( n ) , k(n),n - Q ( n ) ,Cn,n- p + ( n ) .

Proof. It is immediately clear that

UUC(R(n)) = UUC(n - Q((n))

and
UUC(k(n)) = UUC(n - p+(n)).
By Theorem 10.3 it is also clear that

UUC(R(n)) = UUC(k(n)).

As far as the process Cn is concerned, the inequality


UUC(<,) c UUC(n - 9 ( n ) )
is trivial. However, the equality is not quite clear but following the origi-
nal proof of Theorem 11.12 given by Chung and ErdBs (1952) we get the
required result.
The characterization of the lower classes of n - Q ( n )is trivial since we
have
9 ( n )= n i.0. a.s.
The characterization of the lower classes of Cn is also trivial. In fact as a
simple consequence of Theorem 12.1 we obtain
EXCURSIONS 137

THEOREM 12.2 Assume that f ( x ) satisfies the conditions of Theorem


12.1. Then
n
- E LLC((',)
f (n)
if and only if

The characterization of the lower classes of R(n) and k(n)is much harder.
We have
THEOREM 12.3 (CsBki-ErdBs-R6v&szI 1985). Let f ( x ) be a nonde-
creasing function for which

Then n
,B- E LLC(Y*(n))
f (n)
if and onlu if

where Y * is any of the processes R(n) and R(n) and ,B = 0,85403.. . is the
root of the equation
Pk
cO0

k=l
k!(2k - 1)
= 1.

Consequence 1.
log log n loglogn A

lim inf ___ R(n)= liminf ___ R(n)= ,B as.


n+cc n n+cc n
Besides studying the length of longest excursion R(n),it looks interesting to
say something about the second, third, . . . etc. longest excursions. Consider
the sample P I , P2 - PI , . . . 7 P((0,n) - P((O,n)-l I n - P<(O,n) (the lengths of
the excursions) and the corresponding ordered sample Rl(n) = R(n) 2
& ( n ) 2 . . . 2 RE(o,n)+l(n).Now we present
THEOREM 12.4 For any fixed k = 1 , 2 , . . . we have
E
log log n
lim inf Rj(n) = k,B a.s.
n
~

n+oo
j=1
138 CHAPTER 12

This theorem in some sense answers the question: How small can the r.v.'s
&(n),&(n), . . . be? In order to obtain a more complete description of
these r.v.'s we present the following:

Problem 1. Characterize the set of those nondecreasing functions f(n)


(n = 1 , 2 , . . .) for which

Theorem 12,l tells us that for some n nearly the whole random walk
{S(k)},"=, is one excursion. Theorem 12.3 tells us that for some n the
random walk consists of at least p-' loglogn excursions. These results
suggest the question: For what values of k = k(n) will the sum C,"=,
Rj(n)
be nearly equal t o n? In fact we formulate two questions:
Question 1. For any 0 < E < 1 let F ( E )be the set of those functions
f ( n )(n = 1 , 2 , .. .) for which

-x
f(n)

j=1
Rj(n) 2 n ( 1 - E)

with probability 1 except finitely many n. How can we characterize F(E)?


Question 2. Let F(o) be the set of those functions f ( n )( n = 1 , 2 , .. .) for
which

j=1

How can we characterize F(o)?


Studying the first question we have
THEOREM 12.5 (CsBki-ErdBs-Revesz, 1985). For any 0 < E < 1 there
exists a C = C ( E )> 0 such that
c log log n E F ( E ) .
Concerning Question 2, we have the following result:
THEOREM 12.6 For any C >0
c log log n $I
F(0)
and for a n y h ( n )/' m ( n -+ m)
h(n)loglogn E F(0).
EXCURSIONS 139

Knight (1986) was interested in the distribution of the duration of the


longest excursion of a Wiener process. In order to formulate his results
introduce the following notations: for arbitrary t > 0 we set
t o ( t ) = sup{s : s < t , W ( s )= O } ,
t l ( t )= inf{s : s > t , W ( s )= O},
d ( t ) = tl(t) - t o ( t ) ,
D ( t ) = sup{d(s) : to(s) < t } ,
E ( t ) = sup{d(s) : s < t , t l ( S ) < t } .
Then we call d ( t ) the duration of the excursion containing t. D ( t ) (resp.
E ( t ) )is the maximal duration of excursions starting by t (resp. ending by
t).
Knight evaluated completely the Laplace transforms of the distributions
of D ( t ) and E ( t ) and the distributions themselves over a finite interval. His
results run as follows:
THEOREM 12.7 (Knight, 1986).

P { y)< .}= 1 F (S) -

where
if YL1,

and
P { y < .) = 1- G (i)
where G(l) = 0 ,

and
2 1
G(2) = - - -
l r 2
The multiple Laplace transform of D ( t ) and some other characteristics
of a Wiener process were investigated by Cskki-Foldes (1988/A). A very
different characterization of the distribution of the zeros of {S,} is due to
Erdtjs and Taylor (1960/A), who proved
140 CHAPTER 12

THEOREM 12.8

lim - l Cpi112
n = r- 1 / 2 as.
n+a logn k = l

Remark 1. (9.8) and Theorem 11.6 claim that p k converges to infinity


like k 2 . However, these two results are also claiming that the fluctuation
of k-2pk can be and will be very large. Theorem 12.8, via investigating the
logarithmic density of p i i 2 , also tells us that p k behaves like k2.
Let us mention a result of Levy (1948) that is very similar to the above
theorem.
THEOREM 12.9

where

Remark 2. Theorems 12.1 and 12.2 imply that

l n
liminf -
n+co n
CI ( s k ) = o as.
k=l

and

Hence the sequence I ( S k ) does not have a density in the ordinary sense but
by Theorem 12.9 its logarithmic density is 1/2.
It is natural to ask what happens if in Theorem 12.9 the indicator
function I ( . ) of (-oo,O) is replaced by the indicator function of an arbitrary
Borel set of R1 . We obtain
THEOREM 12.10 (Brosamler, 1988; Fisher, 1987 and Schatte, 1989).
There is a P-null set N c R such that for all w $! N and for all Borel set
A C R1 with X(dA) = 0 we have
EXCURSIONS 141

where dA is the boundary of A and

1 if z E A,
IA(z) = { 0 if z $A.

For a Strassen type generalization of Theorem 12.10, cf. Brosamler (1988)


and Lacey-Philipp (1990).
For the sake of completeness we also mention
THEOREM 12.11 (Weigl, 1989).

where

A ( ~=) ~ - ~ / ~ ( 1 0 +
g (2y))2
i (0 < y < W)
and I ( . ) is defined in Theorem 12.9.

12.2 Local time and the number of long


excursions (Mesure du voisinage)
The definition of the local time of a Wiener process (cf. Section 9.3) is
extrinsic in the sense that one cannot recover the local time q ( 0 , T ) from
the random set AT = { t : 0 5 t 5 T,W ( t )= O}. Levy called attention to
the necessity of an intrinsic definition.
He proposed the following: Let N ( h , z , t ) be the number of excur-
sions of W ( . ) away from z that are greater than h in length and are
completed by time t. Then the “mesure du voisinage” of W at time t is
limhho hl/’N(h, z, t ) ,and the connection between q and N is given by the
following result of P. L6vy (cf. It6 and McKean 1965, p. 43).
THEOREM 12.12 For all real z and f o r all positive t we have

a.s.

Perkins (1981) proved that Theorem 12.12 holds uniformly in z and t.


Csorgo and Revesz (1986) proved a stronger version of Perkins’ result. Their
results can be summarized in the following four theorems.
142 CHAPTER 12

THEOREM 12.13 For any fixed t' > 0 we have

The connection between N and q is also investigated in the case when a


Wiener process through a long time t is observed and the number of long
(but much shorter than t ) excursions is considered. We have
THEOREM 12.14 For some 0 < (Y < 1 let 0 < at < ta ( t > 0) be a
nondecreasing function oft so that atlt is nonincreasing. Then

The proofs of Theorems 12.13 and 12.14 are based on two large deviation
type inequalities which are of interest on their own.
THEOREM 12.15 For any K > 0 and t' > 0 there exist a C = C ( K ,t') >
0 and a D = D ( K ,t') > 0 such that

{ (log h-1)-3/4
sup 1
h'l2N(h,z, t ) - E q ( z ,t ) 2 C } 5 DhK,
h1'4 (Z,t)€W'X [h,t']

where h < t'.


THEOREM 12.16 For any K > 0 there exist a C = C ( K )> 0 and a
D = D ( K ) > 0 such that

where 0 < at < t .


It is natural t o ask about the analogues of the above theorems for random
walk.
Clearly for any z = 0, f l ,5 2 , . . . the number of excursions away from
x completed by n is equal to the local time <(z,n),i.e.
M ( z ,n) = {the number of excursions away from x completed by n}
= max{i : pi(.) 5 n } = ( ( 2 , n ) .

Hence we consider the following problem: knowing the number of long


excursions (longer than a = a,) away from z completed by n,what can be
EXCURSIONS 143

said about E(z,n)? Let M ( a ,z, n) be the number of excursions away from
z longer than a and completed by n. Our main result says that observing
the sequence { M ( a , , ~ , n ) } r = with
~ some a, = [n"](0 < Q < 1/3) the
local time sequence { ( ( ~ , n ) } rcan = ~be relatively well estimated. In fact
we have

THEOREM 12.17 Let a, = [na]with 0 < a < 1/3. Then

where
P(a) = P(p1 > a } .
The proof of this theorem is based on

THEOREM 12.18 For any K > 0 there exist a C = C ( K ) > 0 and a


D = D ( K ) > 0 such that

where a, = [no](0 < a < 1/3).

Remark 1. Very likely Theorems 12.17 and 12.18 remain true assuming
only that 0 < Q < 1.
In order t o prove Theorem 12.18, first we prove the following

LEMMA 12.1 Let n and a be positive integers and C > 0. Then

provided that
Clog(nP(a))< nP(a)(l - P ( a ) ) . (12.1)
Proof. Clearly M ( a , z, p , ( z ) ) is binomially distributed with parameters
n and P ( a ) . Hence the Bernstein inequality (Theorem 2.3) easily implies
Lemma 12.1.
Proof of Theorem 12.18. Since by (9.10)
144 CHAPTER 12

condition (12.1) holds true if a 5 nP(p < 2) and n is large enough, Lemma
12.1 can be reformulated as follows: for any K > 0 and 0 < $ < p < 2
there exist a C = C ( $ ,p, K ) > 0 and D = D ( $ , p, K ) > 0 such that

P (12.2)

provided that n$ < a < nP


(12.2) in turn implies

(12.3)

+
and for any K > 0, 0 < < p < 2 and 0 < y < S < m there exist a
C = C(y,S,GI p, K ) and a D = D ( y ,S,4,p, K ) such that

(12.4)
Then by a slight generalization of (9.11) (or applying the exact distribution
of [ ( O , n) , cf. Theorem 9.3) for any K > 0 there exist a C = C ( K )> 0 and
a D = D ( K ) > 0 such that

(12.5)

for any x E Z1 or equivalently

P { E ( x , n ) 2 C(nlogn)'/2} 5 DnPK. (12.6)

Let m be a fixed positive integer and assume that the event

{ ~ [(XI m) 2 c ( mlogm)'/2}
A, = r n 5 (0 < p < 1/2)
holds true. Then replacing m by ,on(.) (more exactly assuming that E(x,m ) =
n , i.e. pn(.) 5 m < pn+l(x)) we obtain
EXCURSIONS 145

n2
<m n'/B.
2C2 log n -
Hence

(12.7)

Observe that if <(z, m) < mB then

Consequently
P{J>C,E(z,m) < r n P } = ~ (12.8)
if m is large enough and p < (1 - a)/4 . Hence by (12.8), (12.7) and (12.4)
we obtain
P{J > C } = P{J > C,A,} +P{J > C , < ( x , m ) < m'}
+P{J > C,[(z,m) > C(mlogm)1/2}
I P{J > c,A,} + ~ { < ( z , m> )C(mIogrn)1/2)
5 P { J > C,A m } + Dm PK 5 2Dm-K
if m is large enough, p < (1 - a)/4 and a / P < 2. P can be chosen in such
a way if 0 < CII < 113. Consequently we also have that
P { s u p J ( m , z ) > C } 5 Dm-K
X
146 CHAPTER 12

for any K > 0 if C, D are large enough and 0 < Q < 1/3. Hence the proof
of Theorem 12.18 is complete.
Theorem 12.17 is a trivial consequence of Theorem 12.18.
Note that if Q > 1/5 then P ( a ) can be replaced by (2/7ra)ll2. Hence
we also obtain
THEOREM 12.19 Let a, = [na]with 1/5 < a < 1/3. Then

THEOREM 12.20 For any K > 0 there exist a C = C ( K ) > 0 and a


D= D ( K ) > 0 such that

where a, = [na] (1/5 < Q < 1/3)

12.3 Local time and the number of


high excursions
The previous section gave a method to evaluate the local time of a Wiener
process resp. random walk in [ O , t ] hzving the number of long excursions
in this interval. A natural question is: can the local time be evaluated
having the number of high excursions. A positive answer of this question
was given by Khoshnevisan (1994). He succeeded to give a very exact rate
of convergence in this problem.
Let u,(x,t)be the total number of times before time t that W ( . )up-
+
crossed the interval [x,x E ] . Equivalently uE(z,t ) is the number of excur-
sions away from x up to t which are higher than E .
THEOREM 12.21 (Khoshnevisan, 1994) For every t > 0 and x E R1 we

and

The analogue question for random walk can be answered similarly ap-
plying the method of the previous section and Lemma 3.1 instead of (9.10).
Here we present only a weak form of the answer. Let M ( a , x , n ) be the
number of excursions away from x higher than a and completed by n.
EXCURSIONS 147

THEOREM 12.22 For any z = 0 , f l ,f2,.. .

provided that a, = n" (0 < Q < 1/2).

12.4 The local time of high excursions


Theorem 9.7 described the distribution of the local time J ( k , pl) of the ex-
cursion { S OS1,., . . , S p l } .Now we are interested in the properties of J ( k , p1)
when k is big, i.e. when k is close to M+(p1), the height of the excursion
{So,5'1,. . . , S p l } .We are especially interested in the limit distribution of
J(k,pl) when k is close t o M+(p1) = n and n + 00. First we present the
following

THEOREM 12.23 For any n = 1 , 2 , . . . and 1 = 1 , 2 , . . . we have

and if n + 00 then
1-1

P{J(%Pl) = 1 I M+(p1) = n } = -+ 2-1,

2n 2(n - 1).
-+ 2.
+
(n l ) z

Proof. By Lemma 3.1 the probability that the excursion { S OS1,. , . . ,S p l }


hits n is (2n)-', i.e. P{M+(pl) 2 n } = (2n)-l. The probability that after
the arrival time pl(n) the particle turns back but hits n once more before
arriving at 0 is 1/2(1- l / n ) . Hence the probability of having I - 1 negative
excursions away from n before p1 is (1/2(1 - l/n))'-'. Finally (2n)-l is
the probability that after 1 - 1 excursions the particle returns to 0.
In order to study the properties of E ( M + ( p l ) -j, pl), first we investigate
the distribution of J ( n / r + ( p l ) - j , p l ( M + ( p l ) ) ) . (Note that p l ( M + ( p l ) ) is
the first hitting of the level M + ( p l ) . )
148 CHAPTER 12

L E M M A 12.2 For any 1 = 1 , 2 , . . . , n = 1 , 2 , . . . , j = 1 , 2 , . . . , n - 1


PW+(Pl)2 12, E(n - j , Pl(n))= 0

P r o o f . By Lemma 3.1
1 1
- P { M + ( P l )2 n -j } .
2n-j
Further.

A))u
1-1-u

(; (1- (; (1- ;))


is the probability that after p l ( n - j ) the particle makes u negative ex-
cursions away from n - j (none of them reaches 0) and 1 - 1 - u positive
excursions away from n - j (none of them reaches n ) in a given order. Fi-
nally (2j)-l is the probability that after the 1 - 1 excursions the particle
goes to n.
L E M M A 1 2 . 3 F o r a n y n = 2 , 3 ,..., l = 2 , 3 , . . . , j = 1 , 2 ,...,n - 1
P{F(n - j , P l ) = 1 I M+(Pl) = n , l ( n , P l )= 11
EXCURSIONS 149

where U1 and U2 are i.i.d.r.v.’s with

n
P{U, = m } = ( i = 1 , 2 ; m = 1 , 2 ,...).
(12.9)
Further,

and

4j(n - j ) 2 j ( n - j ) - n
-
n n
+ 8j2- 4j (n + w).

Proof. Since

E(n - j , Pl) = <(n- j , P1 + (<(. - j ,P1) - E(. - j , Pl I)).( (12.10)

and by Lemma 12.2 the conditional distribution of <(n- j , p1 ( n ) )and that


of E(n - L P l ) - C(n - j , P l ( n ) ) (given {M+(Pl) = n,t(n,p1) = 1 1 ) are
equal to the distribution of U1, we obtain Lemma 12.3 realizing that the
two terms of the right-hand side of (12.10) are conditionally independent.

LEMMA 12.4 For any n = 2 , 3 , . , . , j = 1 , 2 , . . . , n - 1 and 1 = 0 , 1 , 2 , . . .

P{E(j,P 1 ) = 1, M+(Pl) I7% I x1 = 1)


[I-: 1
if 1 = 0,
- 3

n+l if 12 1,
\$(I- +
2 j ( n 1- j )
)l-l

(1- i> n
1 n f l
if 1 = 0,
150 CHAPTER 12

and if n -+ co then

-
- 4 n+l -n+l-j -+ 8 j - 6.
n( n + l), n

P r o o f is essentially the same as that of Lemma 12.2.


L E M M A 12.5 For any n = 2 ,3 ,. . ., j = 1 , 2 , . . . , n - 1, k = 1 , 2 , .. . and
1 = 2,3, . . .

P { l ( n - i P l ) = 1 I M+(P1) = n,l(n,p1) = k}
= P{Ul + v1 + v, + . . . + V k - 1 + u,= 1 )
where u1, V I ,v2,.. . , v k - 1 , U2 are independent r.v. 's with

n
P{Ui = m } = ( i = 1 , 2 ; m = l , 2 , ...)
2.i(n - j ) (1 - 2 j (n - j ) >,-l

4 ( k - 1) n
+ n(n - 1)2
(n - j ) 3 ( 2 j - ~

2(n - j )
--
n -1
-+ 8j2 - 4 j + (k - 1)(8j - 6 ) = 8 j 2 + (2k - 3)4j - 6 ( k - 1).
EXCURSIONS 151

Proof. Clearly

where the terms

and
t(n - j , P 1 ) - 5(. - j ,Pk (n))
are independent. Lemma 12.3 tells us that the conditional distribution of
[(a - j , p1 ( n ) )and ( ( n- j , P I ) - [(n- j , Pk ( n ) )is equal to the distribution
of Ul and U2. Lemma 12.4 is telling us that the conditional distribution
of [ ( n - j , p i + l ( n ) ) - [ ( n - j , p i ( n ) ) (i = 1 , 2 , . . . , Ic - 1) is equal to the
distribution of Vi,V2,.. . ,V k - 1 . Hence we have Lemma 12.5.
Theorem 12.21 and Lemmas 12.3, 12.4 and 12.5 combined imply

THEOREM 12.24 For any j = 1 , 2 , . . . , n - 1; n = 2 , 3 , .

2
-
n n+l +
n ( n 1)
( n+ I - j)' -+ 4j + 2,
E ( ( [ ( n- j,PI) I
- EE(n - j , ~ 1 ) )M'(p1)
~ = n ) -+ 8 j 2 + 4 j - 6.
Further, for any j = 0 ,1 ,2 ,. . . and K > 0 there exist a C1 = C1(K,j ) > 0
and a C2 = C z ( K , j )> 0 such that

p { [ ( n- j , P 1 ) > Cl logn 1 A f + ( P l ) = .} 5 C2n-K


and for any a > 0 and K > 0 there exist a C1 = C l ( a , K ) > 0 and a
CZ = CZ(CU,K ) > 0 such that

P { [ ( n- a l o g n , p l ) > ~1 log2 n 1 ~ + ( p l =) n } 5 C,n-K.

We also obtain the following:

Consequence 1. For any j = 0, 1,2, . . . ,n and n large enough we have

1
P {E(n- j , PI) L 6 j 2 + 4 j + 2 I M+(pi) = n } 5 4j2.
152 CHAPTER 12

Proof. By Chebyshev inequality and Theorem 12.22 we have

P{((n - j,pi) 2 X(Sj2 + 4 j - 6)l” + 4 j + 2 I M f ( p l ) = n} 5 -.A21


Taking X = 2 j and observing that

2j(Sj2 + 4j - 6)1/2+ 4 j + 2 5 6j2 + 4j + 2


we obtain the above inequality.

12.5 How many times can a random walk


reach its maximum?
Let x(n)be the number of those places where the maximum of the random
walk SO, 4 , .. . ,S, is reached, i.e. x(n) is the largest positive integer for
which there exists a sequence of integers 0 5 kl < k2 < . . . < kx(n) 5 n
such that
) . . . = S ( , Q ~ )=
S(k1) = S ( l ~ 2= ) M+(n). (12.11)
Csaki (personal communication) evaluated the exact distribution of x(n).
In fact he obtained

THEOREM 12.25 For any lc = 0 , 1 , 2 , .. . , [n/2]; n = 1 , 2 , .. . we have

P{x(n) 2 k + 1) = 2-”{M;-, 2 k}.

Proof. Consider the sequence

{ X ( h + 21, X(k1 + 31,. . . , X ( k Z ) ) ,


(X(lc2 + 2),X(k2 + 3), . . . , X ( k 3 ) ) , . . I
{X(lcx(n)-l f 211 x(kx(n)-l + 3)i * * * I x(kx(n))}~
{ X ( 1 ) 1 X ( 2 ) , . .. , X ( h ) } I
{ X ( k x ( n )+ 11, X&(n) + 211.. . ,X n )
+
where X(l)= X L = S(l 1) - S(l)and kl,k 2 , . . . ,kx(,) are defined by
+
(12.11). Let 5’; ( j = O , l , 2 , . . .,n- x(n) 1) be the sum of the first j
of the above given random variables in the given order. Then {S:} is a
+
random walk and x(n) 2 k 1 if and only if maxo<j5,-k - Sj.2 k which
implies the Theorem.
Now we prove a strong law.
EXCURSIONS 153

THEOREM 12.26
max x(k)
l<k<n
1
lim -
- - a.s.,
n-tm lg n 2
consequently
x (n > -
limsup - 1
- - a.s.
n-+m lgn 2
and trivially
x (n ) = 1 i.0. a.s.

Proof. Consider the sequence

P(taf= k} = 2-k ( I c = 1 , 2 , . . . ; 2 = 0 , 1 , . . .)

and the random variables 6: (i = 0 , 1 , . . .) are independent. Hence for any


L = 1 , 2 , . . . and K = 1 , 2 , . . . we have

P(max(2f
a<L 5 K } = (1 - ,>,+,. (12.12)

Choosing
1-& n1/2
K=Kn=- lg n and L = L n = -
2 (kn)
we obtain

if n is large enough. Hence


max
iln'/z(lg n)-2
taf 1-&
lim inf 2 T a.s.
n+cc kn
154 CHAPTER 12

Since M L 2 n 1 / 2 ( l g n ) - 2 a.s. (cf. Theorem of Hirsch, Section 5.3) for all


but finitely many n we get

ye;x(i) 1
lim inf 2 - a.s.
lgn 2
~

n-m

Similarly, choosing

K = K n = -I + & lgn and L = L , = n 1 l 2 l g n


2
we get

Let n j ( T ) = jT. Then we have

maxJa* 5 K a.s.
a<L

) L = L,,(T). Let j T
for all but finitely many j where K = K n j ( ~and I
+
N 5 ( j l ) T .Then

if N is large enough, which in turn implies the Theorem.


LEMMA 12.6 L e t

Mk = max{S(pk), S(PL + I), . . . , s(plc+l)}(k = 0 , 1 , . . . ,n - 1)


and let 0 5 M I : , I M2:n 5 . . . 5 Mn:n = M+(Pn) be t h e ordered sample
obtained f r o m t h e sample Mo, M I , .. . , M n - l . T h e n f o r a n y 0 < E < 1 we
have
Mn:n- Mn:n-l 2 nE a s .
f o r all but finitely m a n y n.

Proof. Let
EXCURSIONS 155

Observe that for any i fixed

and for any i , j , m with n(logn)-a 5 m 5 n(logn)", 0 5 i # j 5 n we


have

Hence

Let T be a positive integer with T(l - E ) > 1. Then only finitely many
of the events an^ will occur with probability 1. Let nT 5 N _< ( n l)T. +
Then
AN c A(n+l)T(2Q,E).
Consequently only finitely many of the events A, will occur with probability
one.
Since
M,:, 5 n (1 0 g n )~ a s .
n ( l ~ g n )5- ~
(cf. the LIL, the Other LIL and Theorem 11.6) we obtain the Lemma.
Lemma 12.6 and Theorem 12.22 combined imply
THEOREM 12.27 For any C > 0 there exists a D = D ( C ) > 0 such
that
sup E ( M +( n )- j , n ) 5 D log3 n U.S.
j<clog n
f o r all but finitely many n.
In this section as well as in Section 12.3 we investigated the local time
of big values. Many efforts were devoted to studying the local time of small
values. Here we mention only the following:
THEOREM 12.28 (Foldes-Puri, 1989). Let
/ j =~ min{k : (Ski = N )

and
E((--CYN,aN),fiN)= # { k : 0 5 k 5 fiN, ISklF a N } .
Then for any 0 < a 5 1 we have
(12.13)
156 CHAPTER 12

(12.14)

where c o ( a ) is the unique root of the equation


Q
utanu = -
1-Q

in the interval (0,7r/2].

Note that in case Q = 1, Q(Q) = 7r/2. Hence (12.13) (resp. (12.14)) are
equivalent with the Other LIL (resp. LIL of Khinchine).
Chapter 13

Frequently and Rarely Visited Sites

13.1 Favourite sites


The random set Fn = {x : [(x,n ) = ( ( n ) }will be called the set of favourite
points of the random walk { S ( n ) }at time n. The largest favourite points
will be denoted by fn = max{x : x E Fn}.
Of the properties of {fn} it is trivial that fn u(n) with probability
1 except for finitely many n if u ( n ) E UUC(S,). Hence we have a trivial
result saying that fn cannot be very large. The next theorem claims that
fn will occasionally be large.

THEOREM 13.1 (Erdos-Rev&, 1984). For a n y E >0

with probability 1 infinitely often.

Having this result, one can conjecture that fn will be larger than any func-
tion Z(n)i.0. with probability 1 if Z(n) E ULC(S,).
However, it is not the case. Conversely, we have

THEOREM 13.2 (ErdBs-R6vksz, 1984).

fn 5 ( 4 2 log, n + 3 log, n + 2 log, n + 2 log, n + 2 log, n))1’2


wath probability 1 except for finitely m a n y n.

It also looks interesting t o investigate the small favourite points. Let gn =


min{lzl : 2 E Fn}. Bass and Griffin (1985) proved that gn cannot be very
small. In fact

THEOREM 13.3

lim inf
Sn co if y > 11,

157
158 CHAPTER 13

Shi and T6th (2000) proposed the

Question. Find the value of the constant yo such that with probability 1

lim inf
0 if y < yo,
n+oo .1/2(1ogn)-r oo if y > yo.

They also remark that “there is a good reason to expect that yo would
be in [I, 21.’’
They also investigated the limit distribution of f n and proved

THEOREM 13.4

where U ( x ) is the distribution function of the random variable U defined by

V(U,1) = SUP V ( 2 , 1).


XEWl

The exact form of U ( x ) is unknown. However Shi and T6th formulated


the following:

Conjecture. There exists a constant v > 1 such that

Erdos-Revksz (1984) proposed to study the cardinality #IFnl of Fn.


Everyone can see immediately that #IFnl = 1 and #IFnl = 2 i.0. with
probability 1. This problem is still open. The strongest available result is

THEOREM 13.5 (T6th, 2001) Let

Then

(13.1)

(13.1) clearly implies

P(#(F*(2 4 i.0.) = 0.
FREQUENTLY A N D R A R E L Y VISITED SITES 159

Tdth and Werner (1997) proposed to investigate the problem of favourite


edges instead of favourite points. Let

si= si-1 + Si + 1
2
and let
q z ,n) = #{j E [l,n], sj = }.
be the local time of the edge [z - 1,XI. Now let

Kn = {z : l ( x , n )= ma?!(y,n)}
yEZ

be the set of favourite edges at time n and put

k ( r ) = #{i : i 2 1, #IKil 2r } .
THEOREM 13.6 (T6th-Werner, 1997) Almost surely k ( 4 ) < 00. More-
over Ek(4) < co.

We ask about the joint behaviour of fn and [(n). If f n and [ ( n )


were asymptotically independent, then one would expect that the limit
set of {fn/(2nloglogn)1/2, <(n)/(2nloglogn)1/2} should be the half-disc
+
{ ( x , y ) : y 2 0 , x 2 y 2 5 1). However, the correct answer shows that
things do not go exactly like this.

THEOREM 13.7 (CsBki-Rkvesz-Shi, 2000) With probability 1, the ran-


dom sequence

is relatively compact, whose limit set is identical to the triangle

We also study the jump sizes of favourite sites. Let l ( 0 ) = 0 and

THEOREM 13.8 (Csaki-Revksz-Shi, 2000) With probability 1,


160 CHAPTER 13

This theorem tells us that the extraordinarily large jumps of the favourite
site are asymptotically comparable to the size of the range of the random
walk.
Here we present a few unsolved problems (Erdos-Rkvksz, 1984 and
1987).

1. Consider the random sequence {u,} for which IFv,,/ 2 2. What can
we say about the sequence {v,}? Can we say, for example, that
limn-too v,/n = co with probability l ?

2. How can the properties of the sequence Ifn+l - fnl


be characterized?
Is it true that limsup,,, Ifn+l - fnl
= co? If yes, what is the rate
of convergence?

3. Let a(n) be the number of different favourite values up to n , i.e.


a(.) = #I Fkl. We guess that a ( n ) is very small, i.e. a(.) <
(logn)Cfor some c > 0, but we cannot prove it. Hence we ask: how
can one describe the limit behaviour of a(n)?

4. We also ask how long a point can stay as a favourite value, i t . let
1 5 i = i(n) < j = j ( n )5 n be two integers for which

and j - i = p ( n ) is as large as possible. The question is t o describe


the limit behaviour of p(n).

5. Further if 2 was a favourite value once, can it happen that the favourite
value moves away from x but later returns to x again, i.e. do sequences
a, < b, < c, of positive random integers exist such that

Fa,,&,, = 0 and Fa,Fc,, # 0 ( n = 1 , 2 , . . .)?

6. By the arcsine law we learned that the particle spends a long time
on one half of the line and only a short time on the other half with a
large probability. We ask whether the favourite value is located on the
same side where the particle has spent the long time. For example let
0 < n1 = n 1 ( w ) < 7 2 2 . . . be a random sequence of integers for which

j=1
F R E Q U E N T L Y A N D R A R E L Y V I S I T E D SITES 161

where
I ( S j )= 1 if Sj 2 0,
0 if Sj < 0.
Then we conjecture that fnk + 00 as k -+ 00 as.

13.2 Rarely visited sites


It is easy to see that for infinitely many n almost all paths assume every
;’ = 0 if ( ( 0 , n )#
value at least twice which they assume at all, i.e. let 6
r - 1 and 6;) = 1 if [(O,n)= r - 1 and let

f r ( n )= #{k : k # 0, ( ( k , n )= r } +
):6
be the number of points visited exactly r-times up to n. Then

However, for some n the number of points visited exactly once might be
very large.

THEOREM 13.9 (Major, 1988)

limsup fl ( n ) = C a s .
n+oo log 12
where 0 < C < co but its exact value is unknown.

Another interesting result on fi (n) is

THEOREM 13.10 (Newman, 1984).

Efl(n)=2 ( n = 1 , 2 ,...).

Proof. Since fl(1) = 2, we only prove that E f l ( n + 1) = Efl(n) for n > 0.


Consider the walk S; = Sk+l - XI (Ic = 0 , 1 , 2 , . . .) and let f ; ( n ) be the
number of points visited exactly once by S; up to n. Then

+
f;(n) 1 if ( ( 0 , n 1) = 0, +
fl(n + 1) =
{ f;(n) - 1 if [(O,n 1) = 1,
f; ( n ) if E(O,n + I) > 1.
+
Theorem 9.3 implies that P(((0, n + 1) = 0) = P(((0, n + 1) = l}. Hence
we have the Theorem.
162 CHAPTER 13

Let T

j=1

be the number of those visited points which are visited at most T times up
to n. Our question is: might be g T ( n ) = 0 i.0. (r 2 2).
This question was answered by T6th (1996/A).
THEOREM 13.11 For a n y positive integer n

P{g,(n) = 0 i . 0 . ) = 1. (13.2)

Clearly (13.2) tells us that for infinitely many n each visited point is
visited at least r times. We suggest the following problem: characterize
those sequences { r ( n ) ,n = 1 , 2 , . . .} for which (13.2) remains valid replacing
T by r ( n ) . It seems to be clear that (13.2) holds true if r(n)goes to infinity
slowly enough. It is not hard t o give an upper bound for the speed of the
sequences { r ( n ) }which might satisfy (13.2).
THEOREM 13.12 Let r(n)= alogn (a > 1/2). T h e n

9T(II.) >0 a.s.

for all but finitely m a n y n.

Proof. It is a trivial consequence of Theorem 12.26.


Chapter 14

An Embedding Theorem

14.1 On the Wiener sheet


Let {Xi,j, i = 1 , 2 , . . . , j = 1 , 2 , . . .} be a double array of i.i.d.r.v.’s with

1
P{Xi,.j = 1) = P{Xi,j = -1) = -
2
and define = S,,O = 0 ( n = 0 , 1 , 2 , . . .; m = 0 , 1 , 2 , . . .},

j=1 i=l

The arrays {Sn,m}and { X Q } are called random fields. Some properties


of {Sn,m}can be obtained as simple consequences of the corresponding
properties of the random walk, some properties of {Sn,m}are essentially
different. Here we mention one example of both types. Just like in the
one-dimensional case we have

However
lim sup b(nrn)s,,, = a.s.
n+cc
m+co

This latter result is due to Zimmermann (1972) (see also Csorg&Rkvksz,


1981).
On the same way as the Wiener process was defined (Section 6.2) a
continuous analogue of {Sn,m, n = 0 , 1 , 2 , . . . ; m = 0 , 1 , 2 , .. .} can be
defined. This continuous random field will be called Wiener sheet (two-
parameter Wiener process) and will be denoted by

Among the properties of the Wiener sheet we mention

163
164 CHAPTER 14

(i) W ( . ,.) is a Gaussian process,

(ii) W ( 0 ,y ) = W ( x ,0 ) = 0 ,

=~min(z1,xz)
(iii) E W ( ~ I , Y I ) W ( Q , Y ) min(yl,y2),
(iv) W ( x ,y) is continuous a.s.,

(v) for any zo > 0, the one-dimensional process { x 0 1 / 2 W ( x ~ , y )y, 2 0)


is a Wiener process,

(vi) for any yo > 0, the one-dimensional process { y ~ 1 / 2 W ( z , y ~x) 2


, 0)
is a Wiener process.

For some further study and a detailed definition of the Wiener sheet we
refer to Csorgo-R6v6sz (1981). A very useful and detailed new study of the
multiparameter processes is due to Khoshnevisan (2002).

14.2 The theorem


We have already seen that the study of the processes <(x,n) (resp. ~ ( zt)) ,
is relatively easy when x is fixed and we let only n (resp. t) vary. The main
reason of this fact is the following trivial:

LEMMA 14.1 For any integer x

E(<(x,P k ) - 6 ( x , P k - l ) ) = 1, E(J(2,P k ) - S ( x , P k - 1 ) - = 42 - 2

(cf. Theorem 9.7).

In order to formulate the analogue of Lemma 14.1 for q(., .) let

p{ = 0 , p; = inf{t : t 2 0, q ( 0 , t )2u } (u > 0). (14.1)

Then we have

LEMMA 14.2 For any x E R1 ,q(x,p t ) is a process of independent incre-


ments in u(u 2 0), i.e. for any 0 5 u1 < u2 < . . . < uk (k = 1 , 2 , . . .), the
r.v.
A N EMBEDDING THEOREM 165

are independent with

E(77(x, P t j ) - v(x,PtjJ - uj +Uj-d2 =42bj - y-1)

where j = 1 , 2 , .. . ,k.

Consider the process

(14.2)

Then we have

(i)
EC(x,u) = 0, EC2(x,u)= 4x21,

(ii) { L ( x ,u ) ;2~ 2 0) is a strictly stationary process of independent incre-


ments in u for any z E R1.

One can also prove that


(iii) L ( z ,u ) has a finite moment generating function in a neighbourhood
of the origin.

By the Invariance Principle 2 (cf. Section 6.3) this fact easily implies that
for any x E R1 the process L ( x ,u ) can be approximated by a Wiener process
W,t(.)with rate O(logu), i.e.

IW,+(u)- L(x,u)l = O(l0gu).

Having a fixed z this result gives an important tool to describe the prop-
erties of L ( x ,u ) .
What can we say about C ( x , u ) when u is fixed and x is varying? It is
easy to prove that for any fixed u { L ( x , u ) , z2 0) has orthogonal incre-
ments and it is a martingale in z. This observation suggests the question:
Can the process C ( x ,u ) be approximated by a two-parameter Wiener
process?
Since by the LIL ~ ] ( z , u=) 0 a.s. if z 2 ((2 + ~ ) u l o g l o g u ) ~and
/~ u
is big enough, we have C ( x , u ) = -u for any z big enough. This clearly
shows that the structure of C(z, u ) is quite different from that of W ( x ,u )
whenever IC is big. Hence we modify the above question as follows:
Can the process C ( x , u ) be approximated by a Wiener sheet provided
that u is big but IC is not very big?
The answer t o this question is positive. In fact we have
166 CHAPTER 14

THEOREM 14.1 (Csaki-CsorgB-Foldes-R&&z, 1989). There exists a


probability space with
(i) a standard Wienerprocess ( W ( t ) , t >_ O}, its two-parameter local time
process { q ( x , t ) , x E R1,t > 0 ) and the inverse process p: of ~ ( 0 , t )
defined b y (14.1),
z 2 0 ,u 2 0 )
(ii) a two-time parameter Wiener process { W ( z ,u);
such that
sup lL(x,u)- 2 W ( x , u ) J= 0 (u('+~)/'-') a.s. (u+ co) (14.3)
OSzSAd

where L ( x , u ) is defined b y (14.2), A is an arbitrary positive constant and

0 5 S 5 71100, 0 <E < 1/72 - 817.


This theorem is certainly a useful tool for studying the properties of C ( x ,u)
, Unfortunately it does not say much about ~ ( 2t ),. However, we
or ~ ( xp:).
can continue Theorem 14.1 as follows:
THEOREM 14.2 On the probability space of Theorem 14.1 we can also
define a process 6: such that
V
{ X , 21 2 OI=M, u 2 01, (14.4)

Ip:, - =0 (u1518) U.S. (u+ co), (14.5)

{):, u 2 0 ) and { W ( x , u ) ;x 2 0, u 2 0) are independent. (14.6)


Having the process {G;, u 2 0) we can proceed as follows:
Define the local time process fj(0, t ) by

fj(O,ij:,) = u (u2 0).

By the continuity properties of ~ ( 0t ), (cf. Theorem 11.7) we have

lim IV(0, Pt) - d o , 6t)l = a.s.


u+m Ip; - #6;11/21ogu

Thus by Theorem 14.2 we conclude that the local time process {4(0, t )
t 2 0) has the following properties:

t f j ( 0 , t ) ; t 2 O)%(O,t); t 2 O), (14.7)

Ifj(0,t) - ~ ( 0 , t )isl small a.s. (t -+ co), (14.8)


A N EMBEDDING THEOREM 167

{7j(O, t ) ;t 2 0) and { W ( x , u )x; 2 0, u >_ 0} are independent. (14.9)

(14.7) (resp. (14.9)) follows immediately from (14.4) (resp. (14.6)). In


order t o see (14.8) it is enough to show that

is small, which in turn follows from the fact that

I77(07ic) - d O , P 2 l = 177(0,ij3 - 4

is small. Now (14.3), (14.8), and the continuity of W ( . ,.) imply

/q(x,t ) - q(0,t ) - 2W(x,q(0,t))l is small a s

where fj(0, t ) satisfies (14.8) and (14.9).


A precise version of the above sketched idea implies

THEOREM 14.3 (Cs&ki-Csorg6-Foldes-Rh%z, 1989). There exists a


probability space with

(i) a standard Wiener process { W ( t ) ;t 2 0) and its two-parameter local


time process { ~ ( xt ),; x E R1,t 2 0},

(ii) a two-time parameter Wiener process { W ( x ,u ) ; x _> 0, u >_ 0},

such that

)fj(O, t ) - q ( 0 ,t)l = 0 ( t 1 5 1 3 2 log2 t ) as. (t -+ m),

{7j(O, t ) ; t 2 0 ) and { W ( x ,u ) ; x 2 0, u 2 0 } are independent


where
A > 0, 0 5 6 < 71100, 0 < E < 1/72 - 617.
168 CHAPTER 14

14.3 Applications
In order to show how the above theorem can be used in the study of the
properties of v(.,.), first we list a few simple properties of the vector valued
process
~ , t ) ) ,t - 1 / 4 ~ ( 0t,) ) ;t 2 01
{ ( 2 ~ (G(O,
which can be obtained by standard methods of proof.
Namely for any x > 0 and t > 0 we have

(14.10)

(14.11)

(14.12)

where N1, N2 are independent normal ( 0 , l ) r.v.'s. Note that since q(0, t )
and W(x,u) are independent (cf. Theorem 14.3), (14.12) follows from
(14.10) and (14.11).
Also, for any x > 0, the set of limit points of

ut = w(x,7j(O,t)) (t )..
~

JZZG(0, t ) log log t

is the interval [-1,1] a s . The set of limit points of

V - G(0, t )
t - &Ei$$
is the interval [0,1] a.s. Applying the fact that the processes { i j ( O , t ) ; t 2 0)
and {W(x,u); x 2 0, u 2 0} are independent, we get that the set of limit
points of ( U t , & ) is the semidisc {(u,w) : w 2 0, u2 + w 2 5 1). The set of
limit points of

is the interval [0, 21/23-3/4] a s . for any x > 0, that is,

(14.13)H

For any K > 0 the usual LIL implies


A N EMBEDDING THEOREM 169

Applying again the independence of Ut and & we obtain

(14.14)

for any K > 0 and 6 2 0.


Consequently, by Theorem 14.3 and by (14.10), (14.11), (14.12), (14.13),
(14.14) respectively we obtain

(14.15)

t-1/2q(0,t) 2 IN21 (t > O), (14.16)


(14.17)

lim sup 77(Z’t) - ~ ( ‘ 7 ~ ) = 1 a.s. for any z > 0, (14.18)


t-im 2J2z77(O,t)loglogt

(14.19)

(14.20)

for any K > 0 and 0 5 6 < 71200.


For the direct proofs of (14.18) and (14.19) see Cshki-Foldes (1988/B).
This page intentionally left blank
Chapter 15

A Few Further Results

15.1 On the location of the maximum of a


random walk
Let p ( n ) ( n = 1 , 2 , . . .) be the location of the maximum of the absolute
value of a random walk { S n } ,i.e. p ( n ) is defined by

M(n)= ISkl = S M n ) ) and An) I 72. (15.1)

If there are more than one integer satisfying (15.1), then the smallest one
will be considered as p ( n ) . Since p ( n ) = n i.0. a s . , the characterization
of the upper classes of p(n) is trivial. In order to get some idea about the
lower classes we can argue as follows.
Since limn+m p(n) = M a.s. by the LIL for any E > 0

ISM.))l 5 (1+ E)(2P(n) lo~logP(n))1/2


with probability 1 if n is big enough. By the Other LIL for any E >0

consequently

and -
(15.2)

if n is big enough.
We ask:

Question 1. Can p ( n ) attain the lower bound of (15.2)? The answer is


negative. In fact we have

171
172 CHAPTER 15

THEOREM 15.1 (CsAki-Foldes-R6visz1 1987).


(loglog n)2 lr2
lim inf
n+cc n
p(n) = 4 as.

Now we formulate our


Question 2. If
lr2 n
P(n) 5 (1 + El- 4 for some E >0 (15.3)
(loglogn)2
then by the LIL

(15.4)

We ask: Can IS(p(n))l attain the upper bound of (15.4) if p ( n ) is as small


as possible, i.e. if (15.3) holds? The answer is negative again. In fact we
have
THEOREM 15.2 (Cs&ki-Foldes-R6v6sz1 1987). Let
ir2 n
n: n > 1, p ( n ) 5 (1+ E)- 4 (loglogn)2
Then f o r any 6 > 0 there exasts an E =~ ( 6>
) 0 such that
lr
(1 - 6)- < liminf
- 253,
log log n 1/2 lr
5 limsup (7 5 (1 + 6 ) 2 a.s.
M ( n ))
233,
This theorem roughly says that if p(n) N (n2/4)(n/(loglogn)2) (i.e. p ( n )
is as small as possible) then

Question 3. Intuitively it is clear that M ( n ) can be (and will be) small


if p ( n ) is small. Theorem 15.2 somewhat contradicts this feeling. It says
A FEW F U R T H E R RESULTS 173

that if p(n) is as small as possible, then M ( n ) will be small but not as


small as possible without having any condition about p ( n ) . It will be
(7r/2)(n/ l o g l o g n ) 1 / 2instead of (7r/&)(n/ loglogn)1/2which is the small-
est possible value of M ( n ) by the Other LIL. We ask: How small can p ( n )
be, if M ( n ) is as small as possible? The answer is:
THEOREM 15.3 (Csaki-Foldes-Rkv&z, 1987). For any L > 0 there
exists an E = E ( L )> 0 such that with probability 1 the inequalities

and
n
'
~ ( ~ 1 ' (log log n)2
cannot occur simultaneously if n is bag enough. However, if g ( n ) is a pos-
itive function with g ( n ) 7co then for almost all w e R and E > 0 there
exists a sequence 0 < n1 = n 1 ( w ,E ) < 712 = ~ z ( w , E<) . . . such that

Question 4. Instead of Question 3 one can ask: How big can p ( n ) be, if
M ( n ) is as small as possible? The answer to this question is unknown.
The following theorem gives a joint generalization of the above three
theorems. It also contains the LIL and the Other LIL (cf. Sections 4.4 and
5.3).
THEOREM 15.4 (Cs&ki-Foldes-R&visz, 1987). Let
(loglogn)2
a(n)= n CL(n),

5 y 5 5 1 / 2(1 + (1 - $ 2 ) 1 / 2 }

y2 7r2
={(x,y): z>o, y > o , -+-<1
2 2 8y2 -
174 CHAPTER 15

T h e n the set of limit points of the sequence (a(n), b(n)) as (n + 00) as K


with probability 1.

Remark 1. This theorem clearly does not imply that (a(n),b(n)) E K


or even (a(n),b(n)) belongs to a neighbourhood of K if n is big enough.
However, ( a ( n ) b(n))
, belongs to a somewhat larger set K, 3 K if n is big
enough. In fact we have

THEOREM 15.5 (Csaki-Foldes-R6v6sz1 1987). Let

y2 7r2
K E = { ( z l y ) : z > O , y > O , -22+ - <8y21 +
-~

T h e n for a n y E >0
(a(n), b ( n ) ) E K, as.

f o r ail but finitely m a n y n .

In order to formulate a simple consequence of Theorem 15.4 let R*(n) be


the length of the longest flat interval of { M ( k ) , O 5 k 5 n}, i.e. R*(n)is
the largest positive number for which there exists a positive integer a such
that
0 < a < a R*(n) < n +
and
M ( a ) = M ( a + R*(n)).
Then by Theorem 15.1 (or 15.4) we have

THEOREM 15.6 (Cs&ki-Fo1des-R6v6szl 1987).

(log log n)2 ?I


lim inf ( n - R*(n))= - a s .
n+cc n 4
Equivalently f o r a n y c >0
r2 n
n - (1 - E)- E UUC{R*(n)}
4 (loglogn)2

and
7r2 n
E ULC{R*(n)}.
- (' + T (log log n)2
As far as the lower classes of R*(n) are concerned we have
A F E W FURTHER RESULTS 175

THEOREM 15.7 (CsAki-Foldes-Rkvksz, 1987).


log log n
lim inf
n-ico
- n
R*(n)= p

where p is the root of the equation

O3 Pk =1
k l
k!(2k - 1)

(cf. Theorem 12.3). Equivalently for any E >0

and
n
(1 - &)D- E LLC{R*(n)}.
log log n

Remark 2. In Theorems 12.1 and 12.3 we investigated the length of the


longest flat interval of M + ( n ) . Comparing our results regarding the upper
classes we obtain the intuitively clear fact that the longest flat interval of
M + ( n ) can be (and will be) longer than that of M ( n ) . Comparing the
known results regarding the lower classes no difference can be obtained.
About the proofs of the above theorems we mention that they are based
on the following:
THEOREM 15.8 (Imhof, 1984). Let M ( t ) be the location of the maxi-
mum of a Wiener process and let ut(x,y) be the joint density of the process
( t - l M ( t ) ,t - 1 / 2 m ( t ) ) .Then

15.2 On the location of the last zero


Let q ( n )be the location of the last zero of a random walk { S k ,k 5 n } ,i.e.
q ( n )= max{k : O 5 k <_ n, Sk = 0 ) .
Theorem 12.1 claims that: if g ( n ) is a nondecreasing sequence of positive
numbers then
176 CHAPTER 15

if and only if
cm

n=l
n-l(g(n))-l/Z < oo.

Consequently for any E > 0


n n
E LLC(Q(n)) and -E LUC(Q(n)).
(log n ) 2 + E (1% .I2
Since Q ( n )= n i.0. a s . and Q ( n )5 n the description of the upper classes
of Q ( n )is trivial.
Here we wish t o investigate the properties of the sequence { Q ( n ) }for
those n's only for which Sn is very big or M ( n ) is very small. It looks very
likely that if S, is very big (e.g. S , 2 ( 2 n l o g l 0 g n ) ~ / ~then
) Q ( n )is very
small. In the next theorems it turns out that this conjecture is not exactly
true. In order to formulate our results, introduce the following notations.
Let f ( n ) = n1/2g(n)E ULC(S,) with g(n) f co. Define the infinite
random set of integers
2 = 2 ( f )= { n : s, 2 f ( n ) } .
Furthermore, let a ( n ) , p(n) be sequences of positive numbers satisfying
the following conditions:
a(.) is nonincreasing,
0 < .(n) < 1,
P(n) -1 0,
n 4 n ) f co, nP(n) f co.
Then we have
THEOREM 15.9 (CsBki-Grill, 1988).
ncu(n)E UUC(*(n), 12 E 2)
if a n d only if

Further,
nP(n) E LLC(Q(n), 72 E 2)
if a n d o n l y if
A F E W F U R T H E R RESULTS 177

Remark 1. na(n) E UUC(!&(n), n E 2) means that na(n) 2 * ( n )


a s . for all but finitely many such n for which n E 2. In other words
the inequalities S , 2 f(n ) and Q ( n ) 2 ncy(n) simultaneously hold with
probability 1 only for finitely many n.
In order to illuminate the meaning of the above theorem we present two
examples.

Example 1. Let f(n)= ( ( 2 - &)nloglogn)1/2(0 < E < 2). Then we


obtain that the inequalities

S, >_ ((2 - E)nloglogn)’/’ and *(TI) 2 E(1 + ~ ) n


2
hold with probability 1 only for finitely many n, However,
E
S, 2 ((2 - &)nloglogn)1/2 and Q ( n )2 -(1 - E). i.0. a.s.
2
The above two statements also follow from Strassen’s theorem (Section 8.1).
Further.

S, 1 ( ( 2 - &)nloglogn)’/2 and Q ( n )5 n(logn)-T i.0. a.s.

if and only if 77 5 E . Note the surprising fact that Q ( n )5 n(logn)-’ i.0.


a.s. but there are only finitely many n for which * ( n ) 5 n(logn)-l and
S, 2 ((3/2)n log log n)l/’ (say) simultaneously hold.
Example 2. Let f(n) = ( 2 n l o g l 0 g n ) ~ / ~Then
. we obtain that for any
E> 0 the inequalities

hold with probability 1 only for finitely many n. However,


3 log, n
S, 2 ( 2 n l 0 g l o g n ) ~ / ~and * ( n ) 1-- i.0. a s .
2 log, n
Further,

S, 2 ( 2 n l o g l 0 g n ) ~ / ~and %(TI) 5 n(loglogn)-T i.0. a s .

if and only if 77 5 4.
Now we turn to our second question, i.e. we intend to study the be-
haviour of @ for those n’s for which M ( n ) is very small (nearly equal to
mz1/2(810glogn)-1/2, cf. the Other LIL (5.9)). In this case we can expect
that Q ( n )is not very small. The next theorem shows that this feeling is
178 CHAPTER 15

true in some sense. In order to formulate it introduce the following nota-


tions. Let y(n) and 6(n)be sequences of positive numbers satisfying the
following conditions:

Then we have

THEOREM 15.10 (Grill, 1987/B).

depending on whether 13(y,6) < co or 13(y,6) = co where

Consequence. The limit points of the sequence

{ n-l q ( n ) 7r-1n11/2
, (8 log log n)'/'hf(n>}

are the set


( ( 2 , y ) : y2 2 4 - 32, 0 5 2 5 1).

Example 3. The inequalities

and

M ( n ) 5 (4 - 37 - &)1/27r (8lo;ogn) 'I2 (O < <


hold with probability 1 only for finitely many n, i.e. if

*(n)5 yn then M ( n ) 2 (4 - 3 7 - & ) l i 2 r


A F E W F U R T H E R RESULTS 179

for all but finitely many n. Similarly if


1 /2
M ( n ) 5 (4- 37 - &)WT
( 8 1 0 ~ 0 ng ) then Q ( n )2 yn as.

for all but finitely many n. This means that if M ( n ) is very small then
* ( n ) cannot be too small. For example, choosing (4- 37 - =1 6 +
we have: if

M ( n ) 5 (1 + 6)r ( 8 l o i o g n ) 1/2 then Q(n)2 (1- 6)n a.s.

for all but finitely many n.


Having the above result we formulate the following

Conjecture. For any 6 > 0 and for almost all w E R there exists a sequence
of integers 0 < n1 = n1(w, 6 ) < 712 = nz(w, 6) < . . . such that

+
M(72i) 5 (1 S ) r
8 log log ni
and Q(ni)= ni (i = 1 , 2 , . . .).

The proofs of the above two theorems of this paragraph are based on
the evaluation of the joint distribution of Q ( n )and S,. Here we present
such a result formulated to Wiener process.
THEOREM 15.11 (Cshki-Grill, 1988). Let x > 0 , 0 < y < 1. Then

where +(t) is the last zero of W ( s ) , 0 5 s 5 t, i.e.


$(t) = sup{s : 0 5 s 5 t , W ( s )= 0).

15.3 The Ornstein-Uhlenbeck process and a


theorem of Darling and Erd6s
Consider the Gaussian process ( V ( t ) = t-lI2W(t); 0 < t < m}. Then
E V ( t ) = 0, E V 2 ( t ) = 1 and E V ( t ) V ( s )= m,
s < t. The form of this
covariance function immediately suggests that, in order to get a stationary
Gaussian process out of V ( t ) ,we should consider
U a ( t ) = V ( e a t ) ,-KJ < t < +KJ ( a fixed > 0).
This latter process is a stationary Gaussian process, for EU,(t)U,(s) =
e--crlt--sl/2,and it is called Ornstein-Uhlenbeclc process. We will use the
notation U ( t ) = U2(t), and mention, without proof, the following:
180 CHAPTER 15

THEOREM 15.12 (Darling-Erdos, 1956).

lim P{ sup U ( t ) 5 u ( y , T ) } = exp(-e-Y), (15.5)


T-tw O<t<T

lim P{ sup lU(t)l 5 u(y,T)} = exp(-2e-Y), (15.6)


T-too OstST
where for any -m < y < 03
1
y+21ogT+-loglogT-
2

We also mention a large deviation type theorem of Qualls and Watanabe


(1972) (cf. also Bickel-Rosenblatt, 1973).
THEOREM 15.13 For any T > 0 we have

Applying their own invariance-principle method Darling and Erd6s (1956)


also proved
THEOREM 15.14

and
lim P{ max k-1/2
n+w ljk<n
I s k 15u(y,logn)} = exp(-2e-~)
f o r any -co < y < co.
_ - k-1/2Sk is given
A strong characterization of the behaviour of maxl<k<n
in the following:
THEOREM 15.15
max k-1/2Sk
l<k<n
lim =1 U.S.
Tl+w (2 log log n)1/2

Proof. The LIL of Khinchine implies that

max k-1/2Sk
I<ksn
lim sup <1 a.s.
n+oo (2loglogn)1/2 -
A F E W FURTHER RESULTS 181

Applying Theorem 5.3 we obtain that for any n big enough with probability
1 there exists a
n1-6n 5 Ic = K n ( W ) 5 n
with

such that
S, 2 b,’.
Hence

max -
s k
>-s, > J- 2 J2 loglognl-hn 2 (1 - E)J=
l<k<n - fi -
and we have Theorem 15.15.
It looks also interesting to study the limit behaviour of the sequence

We prove
THEOREM 15.16
T

where the exact value of K is unknown.

Proof. Let a, = [ ( l ~ g n ) (~a] > 1). Then by Theorem 7.13 (see also
Section 7.3) for any E > 0 we have

which proves the lower part of the Theorem.


In order to prove its upper part the following result of Hanson and Russo
(1983/A (5.11 a)) will be utilized:
If an = f ( n )logn with f ( n )700 then

Applying again Theorem 7.13 we obtain

5 limsup sup sup


sj+k - sj < 1 a s .
n+m O<j<n2logn<k<an -
182 CHAPTER 15

and clearly
S j + k - sj
limsup sup sup <1 as.
n+cc O<j<n 1<k<2logn d- -

Since f ( n ) may converge to infinity arbitrarily slowly we obtain the


Theorem by the Zero-One Law.
In the first Edition of this book on the value of K of Theorem 15.16 we
presented the Conjecture: K = 1.
Shao (1995) gave an affirmative answer of this Conjecture.
Let v(n) = v ( n ,S ) resp. v ( T ) = v(T,W ) be the smallest integer resp.
the smallest positive real number for which

resp.
T ) max
( V ( T ) ) - ' / ~ W ( Y (= ) t-1/2W(t).
l<t<T
It looks an interesting question to characterize the properties of v ( n ,S )
resp. v(T,W ) . Clearly
v ( n ,S ) = n resp. v(T,W ) = T i.0. a.s
On LLC(v(., .)) we have
THEOREM 15.17 For any E >0
exp((logn)'-') E L L C ( ~ (.)).
~,

Proof. By Theorem 15.15 and the LIL of Khinchine we have


(1 - &)(2loglogn)1/25 (v(n))-lmv(n) 5 (1+ &)(2loglogv(n))1/2,
which implies Theorem 15.17.
Remark 1. Since the Invariance Principle (cf. Section 6.2) only implies
that
'
max k-'l2sk - max t-1/2W(t)l5 O(I) as.,
l_<k_<n l_<t<n
we cannot get Theorem 15.14 from Theorem 15.12 by the Invariance Prin-
ciple. However, applying Theorem 15.17 we obtain

for any E > 0. Hence we obtain Theorem 15.14 via Theorem 15.12 and the
Invariance Principle.
Studying the strong behaviour of U ( t ) Qualls and Watanabe (1972)
proved
A F E W FURTHER RESULTS 183

THEOREM 15.18 For any E > 0

(2l0gT)ll2 + (2 + E)
log log T
(2 log T )1/2
E U U C ( sup U ( t ) )
OLtST

and
(210gT)1’2 + (: -‘) log log T
(210gT)1/2 E ULC( sup U ( t ) ) .
O_<t<T

15.4 A discrete version of the It8 formula


It6 (1942) defined and studied the so-called It6-integral

where f(.) is a continuously differentiable function. Here we do not give


the definition but we mention an important property of this integral, the
celebrated

ITO-FORMULA (Itq 1942).

(15.7)

In fact (15.7) is a special case of the so-called ItB-formula. Here we are


interested to find the analogue of (15.7) for random walk. In fact we prove
THEOREM 15.19 (Szabados, 1989). Let f ( k ) ( I c E Z’)be an arbitrary
function and define

Then for any n = 0,1,2,. . . we have

(15.8)
184 CHAPTER 15

Remark 1. The function g ( . ) can be considered as the discrete analogue


of the integral f(X)dX, ( f ( S i + l )- f(Si))/Xi+l is the natural discrete
version of J ' ( S i ) and Cy.l f(Si)Xi+l can be considered as the discrete
It6-integral.

Proof of Theorem 15.19. We get

(15.()

(In order to check (15.9)consider the cases corresponding to Si = 0, Si > 0,


Si < 0; Xi+l = 1, Xi-1 = -1 separately.) Summing up (15.9)from 0 t o n
we obtain (15.8).
Example 1. Let f ( t ) = t . Then by (15.7)we have

I" W"t)
) -- -
W ( s ) d W ( s=
t
2 2
(15.10)
( 15.10)

and by (15.8)

(15.11)

(15.10)and (15.11)completely agree.


Example 2. Let f ( t ) = t 2 . Then by (15.7)we have

W 2 ( S ) d W ( S )= -- (15.12)
(15.12)
3

and by (15.8)

cn

i=O
S?Xi+l = g(S,+,) - csi
n

i=O
-- Si+l - C S i sn+l.
Sn+1 - -
2 3
n
- F (15.13)
i=O
The term -sn+1/3 of (15.13) is not expected by (15.12). However, we
know that the orders of magnitude of the terms Cy=oS!Xi+l, s2+1/3
and Cy==,Si are n3I2,while that of Sn+l is n1I2 only.
Remark 2. Applying the Invariance Principle 1 (15.7)can be obtained as
a consequence of (15.8).
The celebrated Tanaka formula gives a representation of the local time
of a Wiener process via ItB-integral.
A F E W FURTHER RESULTS 185

TANAKA FORMULA (cf. McKean, 1969). For any x E R1 and t > 0


we have

Here we are interested in giving the discrete analogue of this formula. In


order t o do so instead of <(., .) we consider a slightly modified version of
the local time. Let

= #{k: 0 5 k < n ,
(*(x,n) s k = x},

Then we have
THEOREM 15.20 (Csorgo-Rkvksz, 1985). For any x E Z' and n =
1 , 2 ,. . .

(15.14)

Proof. Observe that

c
where
= { :'[:::;-1
if
if
x # 0,
x = 0.
The above three equations easily imply (15.14).
Remark 3. The Tanaka formula can be proved from (15.14) using Invari-
ance Principle 1.
This page intentionally left blank
Chapter 16

Summary of Part I

Exact Limit Upper Lower Strassen


distr. distr. classes classes type theorems
Sn Th. 2.1 Th.’s EFKP LIL Th. 5.1 Strassen’s
2.9, 2.10 Sect. 5.2 Th. 1.
Sect. 8.1
Mn Th. 2.6 Th. 2.13 EFKP LIL Th. of Th. 8.2;
Sect. 5.2 Chung Wichura’s
Sect. 5.3 Theorem
Sect. 8.4
M2- Th. 2.4 Th. 2.12 EFKP LIL Th. of Th. 8.2
Sect. 5.2 Hirsch
Sect. 5.3
[(O,n) Th. 9.3 (9.11) Th. 11.1 Th. 11.1 Th. 11.18
The limit behaviour of [(O, n ) is the same as
that of M Z by Th. 10.3
Pn Th. 9.3 (9.9) I Th. 11.6 I 1
Th. 11.6 Th. 11.19
Since [(O, p n ) = n a description of ((0, n) gives a
description of pn
[(x,n) Th. 9.4 The limit behaviour is Th. of
the same as that of Donsker-
<(O,n)for fixed z , n + co. Varadhan
Sect. 11.4
O(n) Th. 9.5 The limit behaviour is the same as that of
<(O,n)/2 (cf. (10.11)) or M z / 2 (cf. Th. 10.4).
t(n) Th. 9.14 Th. 11.5 Th. 11.4
(10.6)
Cn Th. 9.8 (9.12) Th. 12.1 Th. 12.2
@(n) Th. 9.9 (9.12) Trivial Th. 12.1

187
188 CHAPTER 16

Exact Limit Upper Lower Strassen


distr. distr. classes classes type theorems

Replacing Sn,Mn,M,+,[(O,n) ... by ~ ( t ) , r n ( t ) , r n + ( t ) , ~ (.O


. ., tre-
),
spectively the above-mentioned results remain true by the Invariance Prin-
ciple 1 (cf. Section 6.4) and Theorem 10.1, with the exception that there is
no immediate analogue of O ( n ) and the natural analogue of x ( n ) does not
have any interest.
In some cases we also investigated the joint behaviour of the r.v.’s of
the above table. A table for these results is

M,+ M i An) iD (n)


S, Th.’s 2.5, 2.6, Th.’s 2.5, 2.6, Th. 15.9
5.8 5.8
M7l Th.’s 15.4, 15.5 Th.’s 15.10,
15.8 15.11
M: Th.’s 5.5, 5.6

Clearly many of the results of Part I are not included in the above two
tables. For example, the results about increments, the rate of convergence
of Strassen-type theorems, the results on the stability of the local time, etc.
are missing from the above tables. A summary on the increments of the
Wiener process is given at the end of Section 7.2.
“While 10 or 11 dimensions doesn’t sound much
like the spacetime we experience, the idea was that
the other 6 or 7 dimensions are curled up so small
that we don’t notice them; we are only aware of
the remaining 4 large and nearly flat dimensions.”

(S. Hawking: The Universe in a Nutshell)

11. SIMPLE SYMMETRIC


RANDOM WALK IN Zd
This page intentionally left blank
Notations

1. Consider a random walk on the lattice Zd. This means that if the
moving particle is in x E Z d in the moment n, then at the moment
+
n 1 the particle can move with equal probabilities to any of the
2d neighbours of x independently of how the particle achieved x.
(The neighbours of an II: E Z d are those elements of Z d whose d - 1
coordinates coincide with the corresponding coordinates of x and one
coordinate differs by f l or -1.)
Let Sn = S ( n ) be the location of the particle after n steps (i.e. in the
momemt n ) and assume that SO= 0. Equivalently: Sn = X I X2 + +
. . . + X n ( n = 1 , 2 , .. .) where X I ,X 2 , . . . is a sequence of independent,
identically distributed random vectors with
1
P(X1 = e i } = P{X1 = - e i } = - (i = 1,2,. . . d)
2d
where e l , e 2 , . . . , e d are the orthogonal unit-vectors of Z d .

5. [ ( n )= max<(z,n).
XEZd

191
192 11. SIMPLE S Y M M E T R I C R A N D O M W A L K IN Zd

10. y = n+w lim <(O,n)= 0) = P{pl = a}= 1 - c


lim yzn = P{ n-03 42j
.
j=1

11. Let T, be the first hitting time of x, i.e. T, = min{i : i 2 1, Si = x}


with convention that T, = co if there is no i with Si = x. Let T = To.
In general, for a subset A of Z d , let TA denote the first time when
the random walk visits A i.e. TA = min{i : i > 0,Si E A}.
12. Let y(x) be the probability that the random walk never visits x i.e.
y(z) = P{T, = a}.

13. z, = P{T < T,}.


14. s, = P{T, < T}.
15. S(1)= {x : x E Z d , llxll = 1).
16. Let W ( t )= (Wl(t),WZ(t), . . . ,Wd(t)),where W l ( t ) , W z ( t .).,. , Wd(t)
are independent Wiener processes. Then the Rd valued process W ( t )
is called a d-dimensional Wiener process.

17. m(t)= max IIW(s)ll.


o<s<t
Chapter 17

The Recurrence Theorem


This chapter is devoted to proving the

RECURRENCE THEOREM ( P d y a , 1921).

P{Sn = 0 i.0.) = 1 if d 5 2,
0 if d > 2.
In the early sixties Gyorgy P6lya gave a talk in Budapest where he told
the story of this Theorem. He studied in the teens at the ETH (Federal
Polytechnical School) in Zurich, where he had a roommate. It happened
once that the roommate was visited by his fiancke. From politeness P6lya
left the room and went for a walk on a nearby mountain. After some time
he met the couple. Both the couple and P6lya continued their walks in
different directions. However, they met again. When it happened the third
time, P6lya had a bad feeling. The couple might think that he is spying on
them. Hence he asked himself what is the probability of such meetings if
both parties are walking randomly and independently. If this probability
is big, then P6lya might claim that he is innocent.
By a simple generalization of the Recurrence Theorem it is easy to see
that if the parties wander independently according t o the law of the random
walk then they meet infinitely often with probability 1.
This Theorem was proved for d = 1 in Section 3.1. Hence we concentrate
on the case d 2 2.

LEMMA 17.1 For any n = 1 , 2 , . . .; d = 1 , 2 , .. .

Proof is trivial by a combinatorial argument.


LEMMA 17.2 For any d = 1 , 2 , .. . as n -+co we have

193
194 CHAPTER 17

Further, in case G? =2

1
P{SZn = 0) = -
n7r
+0
Proof can be obtained by the Stirling formula.
For later reference we give the following analogues of Lemmas 17.1 and
17.2.
L E M M A 17.3 Let d = 2. Then

provided that
x + y = 0 (mod 2) and 1x1 + lyl 5 2n
where
k E A,(x, y) if a n d only if Ic =x (mod 2) and 1x1 5 Ic 5 2n - IyI.

Proof is trivial by a combinatorial argument.


L E M M A 17.4 (Erd6s-Taylor, 1960/A, (2.9) and (2.10)). Let d = 2 ,
x = ( 5 1 , ZZ) and x1 x2 +
0 (mod 2). Then

Proof can be obtained by the Stirling formula.


Similarly one can obtain
LEMMA 17.5 Let d 2 2, z= . .,
( ~ 1 , ~ 2 , . xd) and ~1 + * * * + xd 0
(mod 2). Then
THERECURRENCETHEOREM 195

A more precise version of Lemma 17.5 is the following:


L E M M A 17.6 (Lawler, 1991, Theorem 1.2.1). Let 2 = (21,22, . . . ,zd),
=
x1 + x2 + . . . + X d n (mod 2) and d 2 2. Then

where
R, (z) 5 min(O(n-(d+2)/2),~ ( X
l lJ J - ~ ~ - ~ / ~ ) ) .
Lemma 17.6 by a nontrivial calculation implies
L E M M A 17.7 (Lawler, 1991, Theorem 1.5.4). Let d 2 3. Then
M

where

where Wd is the volume of the unit ball in Rd.

..>
L E M M A 17.8 For any d 2 3 there exists a positive constant c d such that
c d f o(l>
P{Sn= z f o r some n } = P{J(x) = l} = Rd-2 (R+
where R = (IxI(and

J(x) =
0 if [(x,n) = 0 for every n = 0,1,2,. , . ,
1 otherwise.

Proof. Clearly
n
P{S, = x} = CP{Sk = x , s j # 2, j = 0 , 1 , 2 , .. . , k - l ) P { S n - k = 0)
k=O

and

c 00

n=O
P{S, = x)
c o n

n=O k=O
M 03

n=O k=O
196 CHAPTER 17

Since by Lemma 17.5

c00

n=O
P { s n = x} = (Kd + o ( l ) ) R 2 - d ( R m)

and by Lemma 17.2


00

C P { S n = 0) < 00,
n=O
we obtain
co
P { J ( z )= 1) = P { s k = z, sj # XI j = 0,1 , 2 , . . . ,k - 1}
k=O
zr (Cd +O(1))R"d.
Hence we have the Lemma.

Remark 1. The proof of Lemma 17.8 shows that


ad
cd= co

CP{sn= 0)
n=O

LEMMA 17.9 In c a s e d = 2
n n

(CP{S2k = )
'
0
k=l

Proof. Clearly
n n
T H ER ECURRENCETHEOREM 197

Hence we have Lemma 17.9.

LEMMA 17.10 Let

p = p ( d ) = P(3n: 2 1, S, = 0) ( d = 1,2, ...)

Then
< 1,
P{S, = 0 id.}=
{0
1
if
if
p
p = 1.

Proof is trivial.

Proof 1 of the Recurrence Theorem. In case d = 2 Lemma 17.9 and


Borel-Cantelli lemma 2* of Section 4.1 imply that

1
p 2 P{Sn = 0 i.0.) 2 -.
2

Hence by the zero-one law (cf. Section 3.2) we have the Recurrence Theorem
in case d = 2.
In case d 2 3 it follows from Lemma 17.2 and Borel-Cantelli lemma 1
(cf. Section 4.1).

Remark 2. Lemma 17.9 is also true in the case d = 1. (The proof


is essentially the same.) Hence we obtain a new proof of the recurrence
theorem in the case d = 1. The third proof (cf. Section 3.2) applied in case
d = 1 does not work in the case d 2 2. The idea of the first proof can be
applied in the case d = 2 but it requires hard work. The second proof can
be used without any difficulty.

Proof 2 of the Recurrence Theorem. Introduce the following nota-


tions:

Po = 1,

k=O
M

k=l
198 CHAPTER 17

Between the sequences ( p 2 k ) and { @ k ) one can easily see the following
relations:
(0) Po = 1,
(i) p 2 = q 2 9
(ii) p 4 = q4 + q 2 p 2 ,
+
(iii) p 6 = q6 $. q 4 p 2 q 2 P 4 ,
............
(k) p 2 k = q2k + q2k-2p2 + . . + q 2 P 2 k - 2 7
............
Multiplying the k-th equation by z 2 k and adding them up to infinity we
obtain
1
P ( z ) = P(z)Q(z) + 1 i.e. Q(z) = 1- -. (17.1)
P(z)
By Lemma 17.2 we obtain

i.e.

Since Q(l) = C& q 2 k is the probability that the particle eventually visits
the origin we obtain

Hence we have the theorem.


Remark 3. In the case d 2 3 the formula
P{Sn = 0 for some TI = I, 2 . . .}

(17.2)

k=O

is applicable to the evaluation of the probability that the particle returns


to the origin. In fact by Lemma 17.2 we have
THERECURRENCETHEOREM 199

where ~~~~

ple, in the case d = 3 one can obtain q2k cg, -


p2k can be numerically evaluated by Lemma 17.1. For exam-
0.35. In fact this method
is not easily applicable for concrete calculations. Griffin (1990) gave a bet-
ter version of it and evaluated the probability of recurrence for many values
of d. For example, for d = 3, 4, 20 he calculated that the probabilities of
recurrence are 0.340537, 0.193202, 0.026376. Note that in the case d = 20,
P(S2 = 0 ) = 0.025. Hence the probability that the particle returns to the
origin but not in the second step is 0.001376.
It also looks interesting to estimate the probability
n-1 O3

P(2n 5 p1 < W} = 1 - 7 - c q 2 j = P{[(0,2n-2) = 0) -7 = x q 2 k


j=1 k=n

that the particle returns to the origin but not in the first 2n - 2 steps. We
prove
LEMMA 17.11 For any d 2 3 and E > 0 we have

(17.3)

where C(d) is a small enough positive constant.

Proof. The upper part of (17.3) is trivial. In fact by Lemma 17.2 we have

= (1 + 2E)-
Now we turn to the proof of the lower part of (17.3). Clearly by Lemma
17.8 for any r > 0 we have

~ ( 2 _<n p1 < ca) 2 ~ ( 2 5n p1 < 00, ~ ~ 5 rn1l2)


2 ~ 1

= C ~ ( 2 5np1 < co, ~2~ = l ~ )


Ikl<rn'/2

=
C
I kl < r n 1 / 2
~ ( p 2l 2 n , ~2~ = k ) ~ { : ~j >
j 0, ~j =l ~ j
200 CHAPTER 17

Choose r so big that for any n >0


P { s ~ ,5 r n l / ' ) > 1 - Y-.2
Hence we have the lower part of (17.3).
Having Lemma 17.11 it is natural to study the properties of the tail
distribution of p t . We prove
LEMMA 17.12 Let d 23

P r o o f . Let e l , e2,. . . be the lengths of excursions of {Sk} away from 0 and


let
Et = max(e1, e2,. . . ,et).
Note that the elements of the sequence { e i } are infinity except finitely
many. Then

and by Lemma 17.11

Similarly by Lemma 17.11 we have


THERECURRENCETHEOREM 201

Hence we have Lemma 17.12.


It is hard to say anything (except some triviality) about the probability
y(x) (the probability that the random walk never visits x).
L E M M A 17.13 Let d 2 3. Then

Proof. The inequality y(x) < 1 follows from the Recurrence Theorem.
The other part of the inequality is implied by

1 - y(x) = P{3n : n 2 0, sn E S(1)+ x}(l - y) 5 1 - y.


The next Lemma tells us how the probabilities 2, and sx depend on -yx.
L E M M A 17.14
7
2,=1- (17.4)
1 - (1 - y(z))2'
sx = (1 - Y(Z))(l - zz), (17.5)
z, + s, = 1 - P{T = T, = m}
=I- Y (17.6)
2 - Y(X) '
P{<(O,W ) + <(z,m) = j } = (1- Z, - s,)(z, + s,)' (17.7)
( j = 0,1,2,. . .).

Proof. Let Z(A) denote the number of visits in the set A up to the first
return to 0, i.e.
T
Z(A) = c I { S n E A}
n= 1

where I { . } denotes the usual indicator function.


Let A(")= {O,x} (i.e. a two-points set) and observe that

if j = 0,
P{Z(A("))= j + 1, T < W} = s"-;,' if j > 0.
(17.8)

Summing up (17.8) we get


w
C P(Z(A("))= j + 1, T < m} = z, + -
1-
OX

2,
j=O

= P{T < m} = 1 - y. (17.9)


202 CHAPTER 17

On the other hand, one can easily see that

1 - y = P{T < m} = P{T < T,} +P{T > T,, T < m}


= P{T < T,} + P{T > T,}P{T, < m}
= z, + s z ( l - y(x)). (17.10)

Now (17.9) and (17.10) imply Lemma 17.14.


LEMMA 17.15

1 - rz(n) := P{T, < }. = 1 - Y(X) + nd/2--1’


0(1) (17.11)

z,(n) := P{T < min(n,T,)} = z, + nd/2--1,


0(1) ( 17.12)

s,(n) := P{T, < min(n,T)} = s, + nd/2--1,


0(1) ( 17.13)

where 0(1)is uniform in x.

Proof. It is a trivial consequence of Lemma 17.11.


Chapter 18

Wiener Process and Invariance Principle


Let WI(t), W2(t), . . . ,Wd(t)be a sequence of independent Wiener processes.
The Rd valued process W ( t ) = {Wl(t), Wz(t), . . . ,Wd(t)}is called a d-
dimensional Wiener process. We ask how the random walk S, can be
approximated by W ( t ) .The situation is very simple if d = 2.
Consider a new coordinate system with axes y = x, y = -x. In this
coordinate system

xn= { (2-1/2,2-1/2)
(-2-1/2, 2-1/2)
(-2-’12, -2-ll2)
(2-1/2, -2-l/’)
if
if
if
if
X,
X,
X,
X,
= ( 1 , O ) in the original system ,
= ( 0 , l ) in the original system ,
= (-1,O) in the original system
= (0, -1) in the original system
,
.
Observe that the coordinates of X, are independent r.v.’s in the new co-
ordinate system (it is not so in the old one); hence by Invariance Principle
1 of Section 6.3 there exist two independent Wiener processes Wl(t) and
W Z ( t such
) that
) 2 - 1 / 2 ~(1n )- SF)/= O(log n) as.

and
1 2 - ~ / ~ ~ 2 (-1 Sg)l
2 ) = O(1ogn) a s
where Sc),St)are the independent coordinates of S, in the new coordinate
system. Consequently we have
THEOREM 18.1 Let d = 2. Then on a rich enough probability space
{a,.T, P} one can define a Wiener process W ( , )E R2 and a random walk
S, E Z2 such that
/ISn- 2-1/2W(n)ll = O(l0gn) U.S.

In the case d > 2 the above idea does not work. Instead we define
K:) = #{IC
: I 5 IC 5 n, XI, = ei or - ei> (i = 1 , 2 , .. . , d )
where ei is the i-th unit vector in Zd. Then by the LIL we have

(18.1)

203
204 CHAPTER 18

for any E > 0 and for all but finitely many n.


Let S, = (Si”, Si”, . . . , Sid’) (where St)is the i-th coordinate of S,
in the original coordinate system). Then by Invariance Principle 1 there
exist independent Wiener processes b%‘l (.), WZ(.), . . . , Wd(’)such that

1st)- w~(Ic~))I
= O(1ogICC))= O(1ogn) as.

for any i = 1 , 2 , .. . ,d. By (18.1) and Theorem 7.13 we have

lWi(IC!p)- Wi ():I 5 0 (n1/4(loglogn)3/4) a.s.


Consequently we have

THEOREM 18.2 On a rich enough probability space { R , . F , P } one can


define a W i e n e r process W ( . )E Rd and a random walk S, E Z d such that

llSn - W (z)1 5 0 (n’/4(10glogn)3/4) a.s.

for any d = 1 , 2 , .

It is not hard to prove that

P{W(t) = O Lo.} = 1 if d =1

and
P{W(t) = 0 for any t > 0} = 0 if d 2 2.
Hence we can say that the Wiener process is not recurrent if d 2 2. How-
ever, it turns out that it is recurrent in a weaker sense if d = 2.

THEOREM 18.3 (see e.g. Knight, 1981, Th. 4.3.8). For a n y E >0 we
have
P{llW(t>ll 5 & Lo.} = 1 if d = 2, (18.2)

P{IIW(t)II 5 E i.0.) = 0 if d23 (18.3)


where L o . means that there exists a random sequence 0 < tl = tl(w,E) <
tz = t z ( w , & ) < . . . such that limn+m t , = 00 a s . and IlW(t,)II 5 E (n =
1 , 2 , .. .).

The proof of Theorem 18.3 is very simple having the following deep
lemma which is the analogue of Lemma 3.1.
W I E N E R PROCESS A N D INVARIANCE PRINCIPLE 205

LEMMA 18.1 (Knight, 1981, Theorem 4.3.8). Let 0 < a < b < c < co.
Then

=I
C-b
if d = 1,
c-a
loge - logb
if d = 2,
log c - log a
C2-d - b2-d
C2-d - a2- d
if d 2 3,

where B = {IlW(t)II= b }

Remark 1. Choosing a = r, b = R, c = co in Lemma 18.1 we obtain: for


any d 2 3 and IluIl = R 2 T we have

P { W ( t )E Q ( u , r ) for some t } = (i) d-2 (18.4)

where
Q ( u , T )= {z : z E R d , 112 - uII 5 T } .
(18.4) is an analogue of Lemma 17.8 for Wiener process.

Remark 2. (18.3) is equivalent to

lim IlW(t)ll = 03
t+cc
a.s. if d 2 3. (18.5)

The rate of convergence in (18.5) will be studied in Chapter 19.


In connection with (18.2) it is natural to ask how the set of those func-
tions Et can be characterized for which

(18.6)
This question was studied by Spitzer (1958), who proved

THEOREM 18.4 Let g ( t ) be a positive nonincreasing function. Then

g ( t ) t 1 I 2 E LLC(IIW(t))I) ( d = 2)

i f and only if C’&((lcllogg(k)l)-l < 03.


206 CHAPTER 18

R e m a r k 3. Theorem 18.4 implies

The proof of Theorem 18.4 is based on the following:

L E M M A 18.2 (Spitzer, 1958). For any 0 < tl < t 2 < 00 we have

Here we also mention a simple consequence of Theorems 2.12 and 2.13


(cf. also Theorem 6.3).

T H E O R E M 18.5 For any d = 1 , 2 , .. . and T > 0 we have


P { m ( T )> u T ~ / = / ~ )ZL -+ co
~ }O ( t ~ - ~ e - " ~as

and
P { ~ ( T<
) U T ~ /=
~ exp(-o(u-2))
) as u + 0.
Similarly for any d = 1 , 2 , .. . as N + co we have
P { M ( N ) > u N ~ / =~ exp(-O(u2))
} if u -+ 00 but u 5 N1l3

and

P { M ( N )< = exp(-0(u-2)) if u + 0 but u >_ N-ll3.


Chapter 19

The Law of Iterated Logarithm


At first we present the analogue of the LIL of Khinchine of Section 4.4.
THEOREM 19.1

(19.1)

and
(198.2)

where bt = (2t log log t)-1/2.

Proof. By the LIL of Khinchine we obtain

limsupbtIIW(t)J(2 1.
t--tm

In order to obtain the upper estimate assume that there exists an E >0
such that
limsupbtIIW(t)ll 2 1 + E a s .
t+m

(Zero-One Law (cf. Section 3.2)). For the sake of simplicity let d = 2
and define 4 k = karccos@, (k = 0 , 1 , 2 , .. . [2n(arccos@)-']) where @ =
+
(1 ~ / 2 ) ( 1 +E ) - ' . If btllW(t)ll 2 1 + E then there exists a Ic such that

b t l c o s b k ~ I ( t +sin$kW2(t)I
) > I + 5. (19.3)
2

Since cos+kWl(t) + sin4kW2(t) (t 2 0) is a Wiener process, (19.3) cannot


occur if t is large enough. Hence we have (19.1) in the case d = 2. The
proof of (19.1) for d 2 3 is essentially the same. (19.2) follows from (19.1)
by Theorem 18.2.
Applying the method of proof of Strassen's theorem 2 of Section 8.1 we
obtain the following stronger theorem:
THEOREM 19.2 T h e process {b,W(t), t 2 0 } i s relatively compact in
Rd with probability 1 and the set of its limit points i s

Cd = {. E Rd, )1. 5 l}.

207
208 CHAPTER 19

The real analogue of Strassen’s theorem can also be easily proved. It


goes like this:
THEOREM 19.3 The net { b t W ( z t ) , 0 5 z 5 1) is reZativeZy compact in
c(o,1)x . . . X c(0,1) = (c(0, and the set of its limit points is s d
where s d consists of those and only those Rd valued functions f ( x ) =
(fi(z),..., f d ( l c ) ) f o r which f i ( 0 ) = 0, fi(.)(i = 1 , 2 . . . , d ) is absolutely
continuous in [0,1] and cf, J;(f,!(z))’dx 5 1 .
We ask about the analogue of the EFKP LIL of Section 5 . 2 . It is trivial
to see that if a ( t ) E ULC{W(t)} in the case d = 1 then the same is true for
any d. However, the analogue statement for UUC{W(t)} is not true. As
an example, we mention that Consequence 1 of Section 5.2 tells us that in
the case d = 1 for any E > 0

S, 5 2n loglogn + -
( ( (a +E logloglogn ) ) ‘ I 2

for all but finitely many n. However, it turns out that in case d
as.

> 1 it is
not true. In fact for any d > 1

logloglogn i.0. a s .
+

2
Now we formulate the general
THEOREM 19.4 (Orey-Pruitt, 1973). Let a ( t ) be a nonnegative nonde-
creasing continuous function. T h e n f o r any d = 1 , 2 , . . .

(19.4)

Remark 1. The function

+
does not satisfy (19.4) if a2 = 2 , a3 = d 2 , a h = 2 for 4 5 k 5 n but it
does if a, is increased by E > 0 for any n 2 2.
THE LAW OF ITERATED LOGARITHM 209

It was already mentioned (Chapter 18, Remark 2 ) that


lim IlW(t)II = 00 a.s. if d 2 3. (19.5)
t+m

Now we are interested in the rate of convergence in (19.5). This rate is


called rate of escape. We present
THEOREM 19.5 (Dvoretzky-Erdds, 1950). Let a ( t ) be a nonincreasing,
nonnegative function. T h e n

t 1 / 2 a ( t )E LLC(llW(t)ll> ( d 2 3)
and
di2+) E L L C ( ~ ' / ~ ~ ~ S ,(Id(2
) 3)
i f and only if
co
< 00.
C(a(2"))"-2 (19.6)

Remark 2. The function

does not satisfy (19.6) if E 5 0, but it does if E > 0.


In case d = 2 we might ask for the analogue of Theorem of Chung of
Section 5.3, i.e. we are interested in the liminf properties of

This question seems t o be unsolved.


Theorems 19.4 and 19.5 together imply: there are infinitely m a n y n for
which
IlSnlI 2 d- 1/2b-1
n (19.7)
and for every n big enough
llSnll 2 n1/2(logn)-E-l/(d-2) (d 2 3, E > 0). (19.8)

Erdds and Taylor (1960/A) proved that if a particle is very far away
from the origin, i.e. (19.7) holds, then it may remain far away forever
( d 2 3). In fact we have the following:
THEOREM 19.6
P{kinf I l s k l l 2 d-'/'bi1 i.0.) = 1 (ci 2 3).
l n
This page intentionally left blank
Chapter 20

Local Time

20.1 [ ( o , n ) in Z2
The Recurrence Theorem of Chapter 17 clearly implies

Hence we study the limit properties of <(O,n)in the case d = 2.

THEOREM 20.1 (ErdBs-Taylor, 1960/A). Let d = 2. Then

lim P{t(O,n) < zlogn} = 1 - e--nx


n+cc

uniformly for 0 5 z < (logn)3/4 and

The proof of this theorem is based on the following:

LEMMA 20.1 (Dvoretzky-Erd6s 1950, Erd6s-Taylor 1960/A).


7r
P{Pl > n} = P{<(O,n)= O } = -+ 0((1ogn)-2) (d = 2).
log n

Proof. Counting the last return to the origin (cf. also 9. of Notations) we
have

(20.1)

where ((0,O) = 0. Since by Lemma 17.2

211
212 CHAPTER 20

we have
n
log n
zP{szk = 0) % 7. (20.2)
k=O

Since the sequence P{[(O,2n) = 0) is nonincreasing by (20.1) and (20.2)


we obtain
n-1
log n
2n - 2) = 01
1 2 P{E(O, C P{s2k = 0) M ~ { ( ( 0 , 2 n- 2) = 0)-
7r
k=O

and
(20.3)

Similarly for any 0 5 k 5 n by (20.1)

P{J(O,2n - 2k - 2) = O} + c
71-1

j=k+l
P{SZj = 0).

(20.4)
Thus, if k tends to infinity together with n,(20.4) yields

Taking k = n - [n/log n] we have

Hence we have the main term of Lemma 20.1. The remainder can be
obtained similarly but with a more tedious calculation.
Proof of Theorem 20.1. Let q = [zlogn] + 1 and p = [n/q].Then

p{[(oi 2 logn) 2 < n> 2


k=l
n Q

p{Pk - Pk-1 < [n/q]}

= (P(P1 < P } ) q = (1 - P{<(O,p)= 0))Q.

Assuming that z < logn(log, n)-'-' by Lemma 20.1 we obtain

P{<(O,n) 2 zlogn} 2 epx2(1+ o(1)) (n+ m). (20.5)


In fact we obtain that
LOCAL TIME 213

uniformly in IC for x < (logn)3/4.


In order to get an upper estimate, let Ek ( k = 1 , 2 , . . . ,q ) be the event
that precisely Ic of the variables pi - pi-1 (i = I, 2 , . . . ,q ) are greater than
or equal to n, while q - k of them are less than n. Then

Hence

= 1 - (I - ~ { p 3l n))Q2 1 - e-xz(l + 0(10gn)-l/~) (20.6)

by Lemma 20.1 uniformly in x for x < ( l ~ g n ) ~ / ~ .


(20.5) and (20.6) combined imply the first statement of Theorem 20.1.
The second statement can be obtained similarly observing that by (20.6)
and Lemma 20.1 for 5 = z, (n = 2 , 3 , . . .) we have

Note that Theorem 20.1 easily implies

THEOREM 20.2 Let d = 2. Then

lim P p n
n+m { < exp (:)} = exp(-7rz)

uniformly for o < .z < n3/7.


Clearly for any fixed x E Z 2 the limit properties of t ( x , n) are the same
as those of c(0,n). For example, Theorem 20.1 and Lemma 20.1 remain true
replacing ( ( 0 , n ) by ((x, n). However, if instead of a fixed x a sequence xn
(with IIx,ll + co) is considered, the situation will be completely different.
The following result gives some information about this case.

THEOREM 20.3 (Erdiis-Taylor, 1960/A). Let d = 2. Then

P{((x,n) = 0) =
214 CHAPTER 20

Proof. By (20.1) we have


n
(20.7)
i=O

and similarly

p{6(z,2n) = 0) + cn

k=l
p{s2k = x}YZn-Zk+Z = 1 (20.8)

provided that z = ( x 1 , x ~with


) x1 + x2 = 0 (mod 2). (20.7) and (20.8)
combined imply
n
p { [ ( x ,2n) = 0) - %n+2 = c ( p 2 k - p(s2k = z})TZn-Zk+Z.
k=l

Consequently for 1 < kl < k2 < n we get


P { l ( x ,2n) = O } - Y2n+2

k=l k=kl+l

+ cn

k=kz+l
(p2k - P { S 2 k = x}).

Now in the case 400 < 11x112 < n2/3 put kl = llxll 2 and k2 = [n4/5]then by
Lemmas 20.1, 17.2 and 17.4 we obtain

P { t ( x ,2n) = 01 - Y2n+2
LOCAL TIME 215

L 1% 11412+ O(1)
log n
Similarly, for 1 < k3 <n

k=l
Take

Then

hence we have Theorem 20.3 in the case 20 < ))z))< n113. The case
n116 < IIzII < n112/20can be treated similarly.
As a trivial consequence of Theorem 20.3 we prove
L E M M A 20.2 Let nk = [exp(ek'Ogk)].Then for any k big enough
3
P { c ( O , n f g k )- t(0,n.k)= o I~ j j; = 0 , 1 , 2 , . . ., n k ) 5-
logk'

Proof. Since IISn,II 5 n k , for any z E 2


' with 11x11 = n k we have
P { < ( O , n y k )- C$(O,nk)= o 1 sj; j = 0 , 1 , 2 , . . . , n k )
3
5 P { l ( z , n f g k - n k ) = 0 ) 5 logic.
The next theorem gives a complete description of the strong behaviour
of m n).
THEOREM 20.4 (Erdos-Taylor, 1960/A). Let d = 2 and let f(x) (resp.
g ( x ) ) be a decreasing (resp. increasing) function for which

f(.)loga: 700, 9(4(logx)-' LO.


216 CHAPTER 20

Then
7r-lg(n) logn E UUC(l(0,n ) ) (20.9)

if and only if
(20.10)

(20.11)

if and only if
Ji" f o d x
2 log x
< 00. (20.12)

Remark 1. The function

g(2) = log, z + 2 log, x + log, z + . . . + log, 2 + 7- logk+l 2

satisfies (20.10) if and only if T > 1. The function


f ( z ) = (loglogx)-1-'

satisfies (20.12) if and only if E > 0.


Proof of (20.9). Instead of (20.9) we prove the somewhat weaker state-
ment only: for any E > 0

(I +s).rr-l(logn)log,n E UUC(<(O,n)) (20.13)

and
(1 - &)K-l(logn) log, n E ULC(J(0,n)). (20.14)
Let n, = [exp((l + ~ / 3 ) ~ )Then
]. by Theorem 20.1

P {((o,n,) 2 (1 + 5) .rr-l(lognk) log, nk} = 0(k-1-"3),

and by Borel-Cantelli lemma

~ ( 0n,)
, < (1 + f) 7r-1 (lognk) log, n, a.s.

for all but finitely many k . Let nk 5 n 5 nk+l. Then

((0, n ) 5 ((0, %+l) I (1 + f).rr-l(lognk+d log, 72k+l

5 (1 + &)7r-l(logn)log, 72
LOCAL T I M E 217

if k is large enough. Hence we have (20.13).


Now we turn to the proof of (20.14). Let

nk = [exp(ek’Og’)] (k = 2 , 3 , . . .),

and

Then clearly F k C Ek+1 and by Theorem 20.1 and Lemma 20.2

and similarly for any j <k

Hence we obtain (20.14) by Borel-Cantelli lemma.


Proof of (20.11). Instead of (20.11) we prove the somewhat weaker state-
ment only: for any E > 0

(log n) (log log n)-1 E LUC ([( 0, n)) (20.15)

and
(logn)(loglogn)-l-~ E LLC(((0, n ) ) . (20.16)
Let nk = [exp(ek)]. Then by Theorem 20.1

and by Borel-Cantelli lemma

for all but finitely many k . Let 72k 5 n 5 nk+1. Then


218 CHAPTER 20

if Ic is large enough. Hence we have (20.16).


Now we turn to the proof of (20.15). For any T = 1 , 2 , . . . we have

P{pp - ~ ~ < ~e ~ p- ( C1r 2 ~=) )P{p2,-1 < exp(CrY))

Since the r.v.'s p p - p2?-1 ( r = 1 , 2 , . . .) are independent we obtain

P27 > P2' - p2,.-1 2 e x p ( C ~ 2 ~i.0.


) a.s.

and consequently
€,(O, e x p ( C ~ 2 ~5) )2T i.0. a.s.
~ ) C = log2. We get
Let n = e x p ( C ~ 2 with

log n
I(07n) 5 i.0. a s .

Hence the proof of Theorem 20.4 (in fact a slightly weaker version of it) is
complete.
Note that Theorem 20.4 easily implies
THEOREM 20.5 For any E >0
exp(n(logn)l+&)E UUC(pn),
exp(n(logn)'-&) E u ~ C ( p , ) ,

20.2 ( ( n )in zd
As we have seen (Recurrence Theorem, Chapter 17)

lim c(O,n) < 00 a.s. if d 2 3.


n+cc

Similarly for any fixed 2 E Zd

n+m
lim J(z,n ) < 00 a.s. if d 2 3.

However, it turns out that


LOCAL T I M E 219

THEOREM 20.6 For any d 2 1 we have


lim
n+m
t(n)= lim sup t ( z , n )= co a.s.
n+m X E Z d

Proof. Theorem 7.1 told us that the length 2, of the longest head run till
n is a.s. larger than or equal to (1- E ) log n/ log 2 for any E > 0 if n is large
enough. Similarly one can show that the sequence XI Xa,. . . Xn contains
a run el, -el el, -ell . . . el of size (1 - E ) log n/ log 2d. This implies that
I

which, in turn, implies Theorem 20.6.


A more exact result was obtained by ErdBs and Taylor (1960/A). They
proved
THEOREM 20.7 For a n y d 2 3

lim -
‘ ( n ) - ~d a.s. (20.17)
n+m logn
where

In the case d = 2

Erdiis and Taylor (1960/A) also formulated the following:

Conjecture. For d = 2
- 1
lim ~ -- as.
n-03 (logn)2 7r

This Conjecture was proved by Dembo-Peres-Rosen-Zeitouni (2001).


They also investigated the number of points which are visited nearly
(logn)2/7r times up to n. Let

They proved
THEOREM 20.8
220 CHAPTER 20

20.3 A few further results


First we give an analogue of Theorem 12.8.
THEOREM 20.9 (ErdBs-Taylor, 1960/A).
l n l -
1
us. if d = 2.
n ! L % k=l
&sG .rr
The next theorem is an analogue of Theorem 12.1.
THEOREM 20.10 (Erdos-Taylor, 1960/A). Let d = 2 , f ( n ) t 03 as
( n + M) and En be the event that the random walk S, does not return to
the origin between n and n f Then
P { E n i.0.) = 0 or 1
depending on whether the series
03
1
k=l

converges or diverges.
Now we turn to the analogue of Theorem 13.9.
THEOREM 20.11 (ErdBs-Taylor, 1960/A, Flatto, 1976.) Let Q k ( n ) be
the number of points visited exactly k times ( k = 1 , 2 , . . .) up to n. i.e.

Qk(n) =#{j : 0 5 j 5 n , t ( S j , n )= k } .
Then
Qk ( n )log2 n = 1
lim U.S. if d=2 (20.18)
n-+m $71.

and

lim -
Q k ( n ) = y2(1 - ~ ) ~ -U .lS . if d 2 3, k = 1 , 2 , .. .
n+m n
where y = y(d) is the probability that the path will neuer return to the
origin.

Remark 2. Observe that in case d = 2 the limit properties of Q k ( n )


do not depend on k (cf. (20.18)). An explanation of this surprising fact
can be found in Hamana (1997). Further properties of Q k ( n ) are studied
by Pitt (1974) and Hamana (1995, 1997). For example it is proved that
Q k ( n ) ( k = 1 , 2 , . . .) obeys the central limit theorem if d 2 5.
Chapter 21

The Range

21.1 The strong law of large numbers


Let R(n)be the number of different vectors among S1,Sz,. . . ,S,, i.e. R(n)
is the number of points visited by the particle during the first n steps. The
r.v.

will be called the range of S1, S2,. . . , S,. In the case d = 1 Theorem 5.7
essentially tells us that R(n) is going to infinity like n1j2. In the case
-
d = 2 Theorems 20.1 and 20.4 suggest that R(n) n(logn)-'. (Since any
fixed point is visited logn times the number of points visited a t all till n
is n(logn)-'. Clearly it is not a proof since some points are visited more
frequently (cf. Theorem 20.7) and some less frequently (cf. Theorem 20.11)
among the points visited a t all.) In fact we prove

THEOREM 21.1 (Dvoretzky-ErdGs, 1950).


n log log n
[ E+ O ( (logn)2 ) if d = 2,

ER(n) =

I n~ { n+cc
lim <(o,n) = 0} + 0(n1/2)
n~ { lim ~ ( 0n)
n+cc
, = O} + 0(log n)
n P { lim <(O,n)= O} + + O ( n 2 - d / 2 )
n+m
Pd

where P d (d = 5 , 6 , . . .) are positive constants and

n2log log n
if
if
if
d = 3, (21.1)
d = 4,
d 25

VarR(n) = ER2(n)- 5 ij d = 3,
i f d = 4,
if d 2 5.
(21.2)

22 1
222 CHAPTER 21

Further, the strong law of large numbers

lim -R(n) = 1 a.s. if d22 (21.3)


n-w ER(n)

holds.

Remark 1. Theorem 5.7 implies that the range does not satisfy the strong
law of large numbers in the case d = 1.
In case d = 2 resp. 3 (Jain-Pruitt, 1972/B resp. Bass-Kumagai, 2002)
for VarR(n) the following stronger results are known:

clnlogn 5 VarR(n) 5 c2nlogn if d=3

where c1 < c2 are positive constants.


The proof of (21.1) is based on the following:

LEMMA 21.1

P{Sn # Si for i = 1 , 2 , .. . ,n - 1) = P{<(O,n- 1) = 0) = 3;2. (21.4)

Remark 2. The left hand side of (21.4) is the probability that the n-th
step takes the path t o a new point.

P r o o f of Lemma 21.1.

Hence we have (21.4).


Let
1 if Sn#Si for i = 1 , 2 ,..., n - 1 ,
$n={ 0 otherwise.
Then
n

k=l
T H E RANGE 223

Consequently by Lemma 21.1


n n
E R ( n ) = E C $ k =CP{c$(O,k-I)=O).
k=l k=l

Hence (21.1) in the case d = 2 follows fronl Lemma 20.1, and in the case
d _> 3 it follows from Lemma 17.11.
In order to prove (21.2) we present two lemmas.
LEMMA 21.2 Let 15 m 5 n. Then

Proof.

LEMMA 21.3

VarR(n) 5 2ER(n)(ER(n- [n/2]) - ER(n) + ER([n/2])).

P r o o f . By Lemma 21.2
224 CHAPTER 21

Then (21.2) follows from Lemma 21.3 and (21.1). (21.3) can be obtained
by routine methods.
Donsker and Varadhan (1979) were interested in another property of
R(n). In fact they investigated the limit behaviour of Eexp(-vR(n)) (v >
0, n + m). They proved
THEOREM 21.2 For any Y > 0 and d = 2 , 3 , . . .
lim n - d / ( d + 2logEexp(-vR(n))
) = -Ic(v)
n+w

and (Yd is the lowest eigenvalue of -1/2A for the sphere of unit volume an
Rd with zero boundary values.

Remark 3. In the case d = 2 Theorem 21.1 claims that R(n) is typically


7rn/ log n.
However, Theorem 21.2 claims that E exp(-vR(n))
-
Hence we could expect that E exp(-vR(n)) exp( - m n / log n).
exp(-k(v)nl/’).
N

Comparing these two results it turns out that in the asymptotic behaviour
of Eexp(-vR(n)) the very small values of R(n)contribute most. This fact
is explained by the following:
L E M M A 21.4 For any v > 0 there exists a C, > 0 such that
E exp ( -vR(n )) 2 exp ( -C,nl/’).

Proof.

By Theorem 18.5

and we have Lemma 21.4


T H E RANGE 225

21.2 CLT, LIL and Invariance Principle


Having the strong law of large numbers (21.3) it looks natural to ask about
the CLT, LIL and Invariance Principle.
The central limit theorem was proved by Jain and Pruitt (1971, 1974)
in case d 2 3 and by Le Gall (1986) in case d = 2. It turns out that in case
d = 2 the limit distribution is not normal but it is exactly described.
The law of iterated logarithm for d 2 4 was proved again by Jain and
Pruitt (1972/A). The case d = 2 is settled by Bass and Kumagai (2002)
who proved
T H E O R E M 21.3 Let d = 2 . Then

for some c >0


The almost sure invariance principle for d 2 4 was proved by Hamana
(1998).
T H E O R E M 21.4 Assuming that d 2 4 and the probability space is rich
enough one can find a Wiener process W ( . )such that
R(n)- E R ( n )
n1/2 - W ( n )= 0(n2l5+') as.
(VarR(n)) / 2
fur any E > 0.
The case d = 3 is harder. This problem was solved by Bass and Kumagai
(2002). They proved the following surprising result.

T H E O R E M 21.5 Assuming that d = 3 and the probability space is rich


enough one can find a Wiener process W ( . )such that

where

r=P{S,#O, n = 1 , 2 ,...},
15
q= -.
32
Clearly this Theorem also implies the law of iterated logarithm for d = 3.
226 CHAPTER 21

21.3 Wiener sausage


Let W ( t )E Rd be a Wiener process. Consider the random set

where
K , = {X : llzll 5 T } .
B,(T) is called Wiener sausage. The most important results are summa-
rized in
THEOREM 21.6 (cf. Le Gall, 1988). For any r > 0 and d = 2 we have

a.s. (21.5)

If d 2 3 then
lim T-lX(B,(T)) = c(r,d) a s . (21.6)
T+CC

where c ( r , d ) is a positive valued known function of r and d.


Further,

lim P{Kd(T)(X(B,(T))- Ld(T))< x} = @d(az


T+w
+ p) (21.7)

where

ad is the normal law if d 2 3 and it as non-normal if d = 2, and a , p, y are


known functions of r and d.
Chapter 22

Heavy Points and Heavy Balls

22.1 The number of heavy points


Theorem 20.11 described the properties of the number Q k ( n ) of the points
z E Z d (d 2 3) visited exactly k times ( k = 1 , 2 , . . .) whenever k is a fixed
positive integer. Now we wish t o study the properties Q k ( n ) when k = k ( n )
converges to infinity. By Theorem 20.7 Q k ( n ) = 0 if k = k ( n ) 2 ( 1 + ~ log
) n
and n is large enough.
Introduce the following notations:

U ( k , n )= # { j : 0 < j 5 n, ( ( S j ,00) = k, Sj # Se (e = 1,.. . , j - I)},

THEOREM 22.1 (Cs&ki-Foldes-Rev&z, 2005) Let d 2 3,

p ( t ) = $1 - (t = 1 , 2 , .. .)
tn = t,(B) = [Xlogn - XBloglogn], ( n = 3 , 4 , . . . , B > 2)
x = -(log(l - y))-l

and H ( t , n ) an9 of

Then we have
lim sup I H ( t , n ) - 11 = 0 a s .
n+m t < t n

In order to prove Theorem 22.1 we introduce a few notations and prove


some Lemmas.

227
228 CHAPTER 22

Let

X , ( t ) = xi
1 i f S j # S i ( j = 0 , 1 , 2 ,..., i - 1 ) , < ( S i , m ) > t ,
0 otherwise,
X ( t , n ) = y,
1 if Sj # Si ( j = 0 , 1 , 2 , . . . , i - I), < ( S i , n )2 t ,
0 otherwise,
p i ( t ) = pi = I ( X i = l)(min{j : <(Si,j)2 t } - i ) ,
P i ( t ) = pi = fi(1 - Y
)?

Clearly we have

R(t,n) = cn

i= 1
yi.

The next lemma can be easily obtained by Lemma 17.11.


LEMMA 22.1 Let
EX( = p i ,
n n

where

An = x-
n
i=l
1
O(l)n1/2
if d > 4 ,
if d = 4 ,
if d = 3.
LEMMA 22.2

02 5 n p + pAnO(1) - n 2 p 2+ 2 ( I + I I +III)
where
H E A V Y POINTS A N D H E A V Y B A L L S 229

Proof. Clearly we have

n n

i=l l<i<j<n i=l

<np
- + ~ A , O ( I +) 2 C E X ~ X- n~2 p 2 .
l<i<j<n
Further

C
l<i<j<n
EXiXj = C
lsi<j<n
P{Xi = I , Xj = I } =I +Ir +III.
Hence Lemma 22.2 is proved.
LEMMA 22.3
(22.1)

(22.2)

Proof. (22.1) follows from (17.6) and Lemma 17.13. Since

z, + s, = P{T < T,} + P { T , < T } 5


5 P { S n = 0 for some n } + P { S n = z for some n}.
By Lemma 17.8 we have (22.2).
230 CHAPTER 22

LEMMA 22.4 For t 5 t,, any E > 0 and large enough n we have

(22.3)

Proof. Now we need to estimate the probability

P { X , = 1, X j = 1, pi 2 nm}.
Define the events Bk by

Bk = {<(si,m)
-<(si,i)
+ < ( s j , m ) - < ( S j , i )= k }
and consider the IC time intervals between the consecutive visits of {Si, S j } .
Then a t least one of these intervals is larger than

(22.4)

(provided that {xi = 1, X j = 1, pi 2 n m } ) .Denote this event by D k .


Then

(22.5)
X€Zd k22t-I

The event BkDk means that placing a new origin at the point Si and
starting the time at i there are exactly k visits in the set (0, z} and at least
one time interval between consecutive visits is larger than n m / k . Hence
applying (17.7) of Lemma 17.14 and Lemma 17.11 we have

- O(l)kd/2n2/d-1(z, + S , ) k - l
<
where 0(1)is uniform in k and x, hence

k22t-1 k22t--1

< O(l)n2/d-1td/2(z z
- + S,)2t-2. (22.6)
H E A VY POINTS A N D HEAVY BALLS 23 1

By (22.5) and (22.6) we have

(22.7)

where R = t1/(d-2).
By (22.1) and Lemma 17.5

(22.8)

By (22.2) we have

L 0(1)P2. (22.9)
By (21.7), (21.8) and (21.9)

(22.10)

5 Xlogn, we have Lemma 22.4.


Since t,
LEMMA 22.5 Let i < j . T h e n f o r t 2 1 integer we have

P{Xi = 1, xj = 1) 5 cp2 (+1


tdl(d-2)
( j - i)d/2 (&)" )
where c is a constant, independent of i , j , t .
232 CHAPTER 22

Proof.

= c
XEZd
+
P{Sj-i = Z}P{<(O,00) <(z, 00)2 2t - 1)

= c
XEZd
P(Sj4 = 2}(2, + Sz)2t-l.
By (22.8) and (22.9) we get Lemma 22.5.
L E M M A 22.6 For t 5 t, any E > 0 and large enough n we have

(22.11)

Proof. It is a simple consequence of Lemma 22.5.


L E M M A 22.7
p2n2
2
+
111 5 - 0 ( 1 ) ~ ~ / ~ ~ ~ . (22.12)

Proof. Let
A = {Si is a new point i.e. Si # Sj j = 1 , 2 , . . . ,i - l},
2 t - l},
B = { ( ( S i , i + n a ) - <(Si,i)
D = {Sjis a new point},
E = { ( ( S j , 0 )- <(Sj,d2 t - 11,

B C H = { ( ( S i ,00)- ( ( S i , i ) 2 t - l}.
Let j > i + 3na. Then
P{Xi = 1, X j = 1, pi < na}
5 P{ABDE} 5 P{ABGE} = P{A}P{B}P{G}P{E}
-< P{A}P{H}P{G}P{E} = y(i + 1)(1 - y)'-'y((j - i)/3)(1 - Y ) ~ - '
Clearly we have

111 5 c y ( i + 1)(1 - y)'-'y((j - i)/3)(1 - Y)~-'


HEAVY POINTS A N D HEAVY BALLS 233

( j -O(l)
i)d/2-1 ) (1 + g)
<
- y2(1 - y ) 2 t - 2 [ (:) + 0(1)(K+ L f M ) i
where

M=C- 1 1
<
id/2-1 ( j - i)d/2-1 - nA,.

Hence we have Lemma 22.7

LEMMA 22.8
0: = O(l)(np +p 2 1.8
72 ). (22.13)

Proof is based on Lemmas 22.2, 22.4, 22.6 and 22.7. The numerical values
of X can be obtained by a result of Griffin (1990):

1- 7 3 = 0.341,
1- 7 4 = 0.193,
1 - 75 = 0.131,
1- 7 6 = 0.104.

Consequently

A3 = 0.929,
Xq = 0.608,
A,=, 10.492,
As = 0.442.

By using tn < X log n, one can verify (numerically)

for d = 3 and hence also for all d 2 3. By choosing an appropriate E we can


see that each term on the right-hand sides of (22.3), (22.11) and (22.12) is
smaller than the right-hand side of (22.13), proving Lemma 22.8.
Lemma 22.8 implies
234 CHAPTER 22

Proof of Theorem 22.1. Now we prove Theorem 22.1 in case

(22.14)

Consequently, since t , < X log n,

(22.15)

Choose C > 2, n ( k ) = exp(k/logk). (22.13) and Borel-Cantelli lemma

(22.16)

and
lim
n(k + 1) = 1.
k+m n(k)
Hence for any E > 0 and large enough n,

since t 5 t , 5 t ( n ( k + 1)).Similarly,

Hence we have Theorem 22.1 in case (22.14).


H E A V Y P O I N T S A N D HEAVY B A L L S 235

The other three statements of Theorem 22.1 can be obtained similarly.

Consequence 1. Apply Theorem 22.1 for

Since R(1,n ) = R(n) and p(1) = y,we obtain (21.3)for d 2 3 as a special


case of Theorem 22.1.

Consequence 2. Apply Theorem 22.1 for

We obtain Theorem 20.11 for d 2 3 as a special case of Theorem 22.1.


As we mentioned Hamana (1995) proved a CLT for Q(t,n)whenever t
is fixed. It looks interesting to try to prove the CLT in case t = t , + 00.

Consequence 3. Apply Theorem 22.1 for

R ( t , n ) , t = [Xlogn - XBloglogn], B > 2


H ( t , n ) = ___
ndt)
we obtain
[ ( n )2 Xlogn - XBloglogn B > 0.
i.e.
Xlogn - XBloglogn E LLC([(n)).
A theorem on the upper classes of E(n) is given in RBvitsz (2004):
+ +
Xlogn (1 &)XloglognE UUC([(n)),
+
Xlogn (1 - a - &)XloglognE ULC([(n))
if
2
d25, a=- 0 < & < 1- a .
d-2’
We do not have any non-trivial result about LUC([(n)). For example the
following question looks interesting: Is it true that

<(n)5 X log n i.0. a.s.

Remark 1. Without any new idea one can prove the following slightly
stronger version of Theorem 22.1:there exists an E > 0 such that
lim nEsup I H ( t , n ) - 1)= 0 a.s.
n--tcr,
tltn
236 CHAPTER 22

22.2 Heavy balls


Theorem 20.7 told us how heavy can be the heaviest lattice point. In fact
we saw that
lim -I(n)
n+cc x log n - 1 a s .
Similarly we might ask about the weight of the heaviest ball. Let

S(1)= {x : z E Z d , 1x1 = 1)

and for any A c Zd let

Then we are interested in the limit properties of

An answer of this question is

THEOREM 22.2 (CsBki-Foldes-R6v6sz-Rosen-Shi, 2005)

In order to prove this Theorem we present two notations and one Lemma.
1. Let 6 be the probability that starting from S(l)the particle returns
to S(1)before visiting 0 i.e.

6 = P{min{Ic : Ic > 1, lSkl = l} < min{k : Ic > 1, sk = 0 ) ) .


2. Let T be the number of outward excursions from S(1)t o S(1),i.e.
rn
7- = C I { S , E S(l), ISj+lI > l}.
j=1

3. Let Z ( A ) denote the number of visits in the set A up t o first return


to 0, i.e.
r
Z ( A ) = X I { & E A}.
n=l
H E A V Y POINTS A N D H E A V Y B A L L S 237

LEMMA 22.10
1
6=1- (22.17)
2d(l - 7 ) '

(
P { p m ( S ( l ) ) = j } = 1- d - - id)(.9 + &)j-l , (22.18)

P{T= M , [(O, CO) =N )


= (N + M ) ( l - d - $ ) d q $ ) N . (22.19)

Proof. Clearly
1
P(Z(S(1)) = j , T < CO} = $-I-
2d '
(22.20)

Summing (22.20) in j we get

which, in turn, implies (22.17). The proof of (22.18) and (22.19) can be
obtained similarly.

Proof of Theorem 22.2. First we prove the upper bound. Note that

and

where
238 CHAPTER 22

Hence for any E > 0 we have


P{ SUP Poo(S(1)
XEZd
+ Z)l(TS(I)+,i n ) > ( c + E ) logn} I n-.
Now the upper bound can be obtained by the usual application of the
Borel-Cantelli lemma.
Now we turn to the lower bound. Let h < c and ni = i( 1 0 g n ) ~
(i = O,1,. . . , [ n @ ~ g n ) - ~Since
]).

p{ s uppn( S ( l ) ) < hlogn} i ( P { P [ ~ ~ ~ ~ I ~ ( S ( ~ ) )< h l ~ g n } ) ~ ( ' ~ g ~ ) - ~ ,


X

applying again the Borel-Cantelli lemma we have the lower bound.


It is natural to ask how can the set S(1)be replaced in Theorem 22.2
by another bounded subset A of Zd. This question was studied by Cs6,ki-
Foldes-Rkvksz-Rosen-Shi (2005). They proved the following three Theo-
rems.
THEOREM 22.3
1
lim sup Pn(A + x) -
-- a.s.
n+ooxEZd logn lOg(1 - 1 / A A )

where A is a finite subset of Zd, AA is the largest eigenualue of the ( A (x (A(


matrix GA(x,y) = G ( x - 3) (x,y E A ) and G ( x ) = CEO P { S k = x}.
To get the exact value of A* is very hard except for some simple A. For
example we have
THEOREM 22.4
1 t, + (22.21)
A{O,Y) =-
Y
where t, = P{T, < m} and
2
AB(O,l) = (22.22)
2 - 29 - (292 + 2/d)1/2
where B ( 0 , l ) = {z : z E Zd, 1x1 5 1).

Remark 1. (22.21) shows that the maximal total time spent in a small
neighbourhood of any point is less than the number of points in the neigh-
bourhood times the maximal local time for points. To study this phe-
nomenon further, consider the sets of two points not too far from each
other.
HEAVY POINTS AND HEAVY BALLS 239

THEOREM 22.5 Put ~ ( n=)(logn)K with any K > 0. Then

lim max
Pn({Z,Y}) - 1 -2
- <
n+Oo Z,YED logn 2-7 log(1-y)
log 2( 1 - y)
where
D = {Z,Y : Z,Y E Zd, (Z - Y( 5 ~(nn)}.
This Theorem expresses the fact that any two points with local time up
to time n , both close to the maximum, should be at a distance larger than
any power of logn.

22.3 Heavy balls around heavy points


Theorems 22.2 and 22.4 tell us that for any E > 0 there exist sequences
{un= un(&)E Zd, n = 1 , 2 , . . .} and {vn = 'u,(E) E Z d , n = 1 , 2 , . . .} such
that
1
cLn(un + S(1))2 -(1- E) logn (22.23)

and
(22.24)

We are interested in the properties of the sequences { u n } , { v n } . For


example how big (or how small) [(u,,n)and [(v,,n) can be. It is not
hard to prove that none of them can be very big. More precisely for any
0 < E < 1 there exist a X I < X and a Xz < X such that (22.23) and (22.24)
hold true, but
E('1Ln,n ) 5 A1 logn
and
E(vn, n>I A2 log=
Let
3(/3,n)= {Z : z E Zd, ((.,n) = [Vlogn]},
( O < P < l ; n = 2 , 3 , ...)
240 CHAPTER 22

Conjecture. There exist functions f(P, S ) and g ( P , B ) such that


lim ‘(P’ n’ = f(P, s), a.s.
12-00 logn
lim ‘ ( P , n’ B , = g(P, B ) , a..s.
n-oo logn
1

22.4 Wiener process


The questions formulated for random walk in Sections 22.1 and 22.2 can be
reformulated for Wiener process. Such problems were initiated and solved
by Perkins-Taylor (1987) and Dembo-Peres-Rosen-Zeitouni (2000).
Let
/$(A) = 1T
IA(W(t))dt
and let
~ ( x , r=) {y : y E ( z - yI 5 r } .
THEOREM 22.6 There exist absolute constants 0 < c1 < c2 < m such
that
(22.25)

for all x E { W ( t ) ,0 5 t 5 T } if E is small enough.


This Theorem is a straight analogue of Theorems 22.2 and 22.3. It
suggests the following two questions:
1. What can we say about c1 and cz?
2. What can we say about the number of heavy balls? More precisely
we ask about the Holder dimension of the set of the centers of the heavy
balls.
The next Theorem claims that c1 and c2 are about 4 q i 2 where q d is the
first positive zero of the Bessel function Jd/2-2(x).
THEOREM 22.7 Let d 2 3. T h e n for any T > 0 and all 0 < a < 4qY2
Chapter 23

Crossing and Self-crossing


It is easy to see (and Theorem 21.1 also implies) that the path of a random
walk crosses itself infinitely many times for any d 2 1 with probability 1.
We mean that there exists an infinite sequence {U,, Vn}of positive integer
valued r.v.'s such that S ( U n ) = S(Un + V n ) ,and 0 5 U1 < U2 < . . . , ( n =
1 , 2 , . . .). However, we ask the following question: will self-crossings occur
after a long time? For example, we ask whether the crossing S(Un) =
+
S(U, V,) will occur for every n = 1 , 2 , . . . if we assume that V, converges
t o infinity with a great speed and Un converges to infinity much slower. In
fact Erdos and Taylor (1960/B) proposed the following two problems.

Problem A. Let f ( n ) be a positive integer valued function. What are the


conditions on the rate of increase of f(n)which are necessary and sufficient
to ensure that the paths {SO, S1, . . . , Sn} and { S,+f(,) , S,+f(n)+l,. . .} have
points in common for infinitely many values of n with probability l ?

Problem B. A point S, of a path is said to be "good" if there are no points


common to {So,5'1,. . . , Sn} and {Sn+l,Sn+2,.. .}. For d = 1 or 2 there are
no good points with probability 1. For d 2 3 there might be some good
points: how many are there?
As far as Problem A is concerned we have
THEOREM 23.1 (Erd6s-Taylor, 1960/B). Let f(n) t 00 be a positive
integer valued function and let En be the event that paths

{SO,Sl,...,Sn)
and {Sn+f(n),S,+f(n)+11...}

have points in common. Then


(i) for d = 3, if f ( n ) = n((p(n))' and p ( n ) is increasing, then
P{En i . o . } = 0 or 1 (23.1)
depending on whether Crzl((p(2'))-l converges or diverges,
(ii) f o r d = 4, if f ( n )= n x ( n ) and x ( n ) i s increasing, then we have (23.1)
depending on whether CEl (kx(2'))-' converges or diverges,
(iii) for d 2 5 , if
f ( m ) 2c-f ( n )
sup -
m2n m n

241
242 CHAPTER 23

(for some C > 0 ) then we have (23.1) depending on whether

n=l

converges or diverges.
An answer of Problem B is
THEOREM 23.2 (Erd6s-Taylor, 1960/B). For d 2 3 let G ( d ) ( n be ) the
number of integers r (1 5 r 5 n ) f o r which (SO,S1,. . . ,S r ) and (ST+l,Sr+2,. .
have no points in common. Then
(i) if d = 3, f o r any E >0
P { G ( 3 ) ( n )> nl/’+€ i.0.} = 0,

(ii) if d = 4,

G ( 4() n )log n G ( 4() n )log n


{
P 0 = liminf
n+m n
5 limsup
n+m n

(iii) i f d 2 5,
G ( d()n )
lim ~ = TA a.s.

where T d is an increasing sequence of positive numbers with Td f 1 as


d+m.

Remark 1. Applying Theorem 23.1 for d = 4 and f ( n ) = n - 1 we find


that for infinitely many n the paths {So,S1,. . . , Sn} and {S2n,Szn+l,. . .}
have a point in common. This statement is not true for d 2 5.
Our next theorem is intuitively clear by Remark 1.
THEOREM 23.3 (ErdBs-Taylor, 1960/B, Lawler, 1980). F o r d = 4, two
independent random walks which start from any two given fixed points have
infinitely many common points with probability 1; whereas for d 2 5, two
independent random walks meet only finitely often, with probability 1.

Remark 2. Theorem 23.3 tells us that the paths of two independent


random walks in Z d ( d 5 4) cross each other. It does not mean that the
particles meet each other.
One can also investigate the self-crossing of a d-dimensional Wiener
process. Dvoretzky-Erd6s-Kakutani (1950) proved the following beautiful
theorem:
CROSSING A N D SELF-CROSSING 243

THEOREM 23.4 For d 5 3 almost all paths of a Wiener process have


double points (in fact they have infinitely many double points), i.e. there
exist r.u.’s 0 5 U < V 5 00 with W ( U )= W ( V ) . For d 2 4 almost all
paths of a Wiener process have no double points.

Remark 3. Comparing Theorems 23.1 and 23.4 in the case d = 4 we obtain


that for infinitely many n the paths { S O ,... , S,} and {SZ,, SZ.+I,. . .} have
a point in common, while for any t > 0 the paths { W ( s ) ;0 5 s 5 t } and
{ W ( s ) ;2t 5 s < co} have no points in common with probability 1. This
surprising fact is explained by Erd6s and Taylor (1961) as follows: for d = 4
with probability 1 the paths { W ( s ) ;0 5 s 5 t } and { W ( s ) ;2t < s < 00)
approach arbitrarily close to each other for arbitrarily large values of t.
Thus they have infinitely many near misses, but fail to intersect. This
explanation suggests the following:
Problem C . Characterize the set of those functions f(*)for which

lim f ( t ) inf (IW(s)- W ( u ) (=( 0 a.s. (d = 4).


t+m O<s<t
27 <
u

Since the paper of ErdBs-Taylor (1960/B) a number of new results about


the crossing of independent random walks appeared. Here we present some
of them.
Let S ( p , n ) = { S l ( n ) , S z ( n,...,
) S p ( n ) }( p 2 2, n = 1 , 2,...) where
S1 ( n ), SZ( n ), . . . ,S, (n) are independent random walks on Z d . Then

I, = 2
ki,kz,...,k,=l
I ( S l ( k 1 ) = S z ( k 2 ) = . . . = SP(kP))

is the intersection local time of S ( p , n).


It is relatively easy to prove that
lim In = 00 a.s. (23.2)
n+m

if and only if p ( d - 2) 5 d .
In case d = 1 the rate in (23.2) was studied by X. Chen and W. Li
(2004). They proved

lim sup n - - ( ~ + ~ ) / ~log


( l ~o )g- - ( P - ~ ) / ~ I ,
n+cc
244 CHAPTER 23

where B ( . ,.) is the ,&function.


The case d = 2, p 2 2 as well as the case d = 3, p = 2 was studied by
X. Chen (2004). He proved

lim sup
n+m
In
n(l0g log n)”-1
= (:) P-1
n(2,p)zp a.s.

if d = 2, p 2 2 and

In
lim sup = n(3,2) a.s.
n+m (n(logl0gn)3)~/~

if d = 3, p = 2 where n ( d , p ) is known.
The case d = 4, p = 2 is settled by Marcus and Rosen (1997):

In 1
lim sup - - a.s.
-
n+m log n log log log n 279

In case d = p = 3 Rosen (1997) proved that

lim sup
In _ -1 as.
-
71-00 log 71 log log log n T

The very interesting monograph of Lawler (1991) is proposed to the


reader who is interested in a more detailed study of the subject of this
Chapter.
Chapter 24

Large Covered Balls

24.1 Completely covered discs centered in the


origin of Z2
We say that the disc

Q ( r ) = {z E z2, 1141 5

is covered by the random walk { S k } in time n if

<(z, n ) > 0 for every z E Q ( r ) .

Let R(n) be the largest integer for which Q(R(n))is covered in time n.
The Recurrence Theorem of Chapter 17 implies that

lim R(n) = co as. (24.1)


n+cc

We are interested in the rate of convergence in (24.1). We prove

THEOREM 24.1 (Erdbs-Ri.v&z, 1988, Ri.v&z, 1989/A, 1989/B, Auer,


1990). For a n y E > 0 and C > 0 we have

~ n) E UUC(R(~)),
exp ( 2 ( 1 o g n ) ~ /log, (24.2)

exp (S 1
(lognlog, n)ll2 E U L C ( R ( ~ ) ) ,

exp (C(logn)'/2) E L U C ( R ( ~ ) ) ,
(24.3)

(24.4)

exp ( ( l o g n ) ' / 2 ( l o g l o g n ) - ' / 2 - ' ) E LLC(R(~)). (24.5)

A sharper statement than (24.2) and (24.3) was proved by Hough-Peres


(2005). They proved that

(24.2-3*)

245
246 CHAPTER 24

Here we prove only (24.2) - (24.5). In fact instead of


(24.3) the stronger Theorem 24.5,
(24.4) the stronger Theorem 24.7,
(24.5) the stronger Theorem 24.3
will be proved. The proof of (24.2) is given as stated above.
About the limit distribution of R(n) we prove
THEOREM 24.2 (Rkvksz, 1989/A, 1989/B). For any z > 0

(24.6)

A sharper statement than (24.6) was proved by Lawler (1993). He


proved that

exp(-4z) 5 Iiminf P
n-+m log n

Note that even the proof of the existence of the limit distribution is hard.
The final result is due to Dembo, Peres, Rosen and Zeitouni (2005).
They proved that

lim P
n+m
{ (10gR(n))2
log n
> z} = exp(-4z). (24.6*)

Here we prove only (24.6). In fact instead of


the upper part of (24.6) the stronger Theorem 24.4,
the lower part of (24.6) the stronger Theorem 24.6
will be proved.
In order to prove Theorem 24.2 at first we introduce a few notations
and prove some lemmas.
Let Q ( T ) be the probability that the random walk {S,} hits the circle
of radius T before returning to the point 0 = (0, 0), i.e.

Q(T) = P{inf{n : llSnll 2 T } < inf {n : n 2 1, Sn = 0)).


Further, let

p ( 0 w z) = P{inf{n : n >_ 1, Sn = O} > inf{n n >_ 1, S, = z}}


:
= P{ { S n }reaches z before returning to 0).
LARGE COVERED BALLS 247

LEMMA 24.1
lim a(.) logr = ~ / 2 . (24.7)
T+CC

Proof. Clearly we have

{inf{n : IlSnll 2 r } > inf{n : n 2 1,Sn = 0}}


c {E(0,r210g2r)> 01+ {O < k <max
r 2 log2 I'
J I S ~5I I r ) .

Since by Lemma 20.1

P{[(O, T 2 log2 T ) = 0 ) M n/2 log T

and by a trivial calculation (cf. Theorem 18.5)

we have
n- + o(1)
2 210gr *

Observe also

Since by Theorem 18.5

P{ max IlSkll 2 r } = exp(-O(logr)),


Ojk_<r2(1og?,)-I

applying again Lemma 20.1 we obtain (24.7).


Remark 1. Lemma 24.1 is closely related to Lemma 18.1.

LEMMA 24.2 There exists a positive constant C szlch that

2 2. Further,
for any x E iZ2 with JJxIJ
248 CHAPTER 24

Proof. Let z = IIzlleZLp.Then by Lemma 24.1 the probability that the


particle crosses the arc Ilzllei@ (q- 7r/3 < II,< q + n / 3 ) before returning to
11z11)-' (for any E > 0 if IIxlI is big enough).
0 is larger than (1 - ~ ) n ( 6 l o g
Since starting from any point of the arc (Izllei@( q - n / 3 < II,< q+7r/3) the
probability that the particle hits z before 0 is larger than 1/2 we obtain the
lower estimate of the liminf. The upper estimate is a trivial consequence of
Lemma 24.1.
Spitzer (1964) obtained the exact order of p ( 0 -A z). He proved

LEMMA 24.3 (Spitzer, 1964, pp. 117, 124 and 125).

LEMMA 24.4 Let

Yi = < ( z , p i ) - J(z,pi-l) - 1 (i = 1 , 2 , . . .)

and Zi = -Y,. T h e n there exists a positive constant C* such that for a n y


S > 0 and f ( n )t 00 for which n / f ( n )+ cc we have

(24.8)

and

(24.9)
~I

where
1 - p ( 0 2.t z)
u2 = EY: =2
P(0 x)
and

(24.10)

(24.8) and (24.9) combined imply

(24.11)
LARGE COVERED BALLS 249

Proof. By a simple combinatorial argument (cf. Theorem 9.7) we get

- -
P{K = -1) = 1 - p ( O - x),
P{Yl = k} = (1- p( 0 z))"p(O
pi = EY1 = EZ1 = 0,
x))2 (k = 0 , 1 , 2 , . . .),

00

< (-l)mq
- + x p 2 q k k ( I c+ 1)* .. (k + m - 1)
k=l
= (-l)mq + rn!pl-mq
where p = p ( 0 2.) x) and q = 1 - p . Hence

and
s zs
, x 6o-ln-l/4.

Then by Lemma 24.2 and condition (24.10) we have

if n is big enough. Hence


250 CHAPTER 24

(Ic = 3 , 4 , . . .) arid

Similarly
22
I logEexp(sZ1)I = /!!!J(-s)l L 2+ s3es + -
Is2 S2

P2 .
Let
F,(k) = P{Y1 + Y2 + . . . + Y, = k}
and define a sequence U1, U2,. . . of i.i.d.r.v.'s with
P{U1 = k} = e -$(s)eskP{Yl = k}.

Then
P{Y~+Y~+..-+Y,> 6an3/4} = C F, ( I ~ )= en$(s)
k > b ~ n ~ / ~
c ePSkV,(k)
k > b ~ n ~ / ~

where
V n ( k )= P(U1 + u2 + . . . + u,= k}.
Hence
P(Y1 + ~2 + . . + Y, > 6 m 3 / 4 }
+

1
5 exp(n+(s) - ~ b m ' / / " ) ~

and we have (24.8).


Note that the proof of (24.9) is going on the same line but instead of
the sequence {U,} we have to use the sequence {U;} defined by

P{U*1 -- I C )= e-$(-S)eSkP{Z1 = IC)


LARGE COVERED BALLS 25 1

and we have the lemma.


Now we turn to the proof of (24.5). In fact we prove the much stronger
THEOREM 24.3 (Auer, 1990). For any E > 0 we have

(24.12)

Remark 2. Note that (24.5) claims that the disc around the origin of
radius r, is covered in time n. The meaning of (24.12) is that the very

be visited about J(0,n) -


same disc is “homogeneously” covered, i.e. every point of this
logn times during the first n steps.
one-dimensional analogue of this theorem, cf. Theorem 11.20.
disc will
For the

Proof of Theorem 24.3. Clearly

and by Lemma 24.4 we have

where A , = exp(n’/2/f(n)). Let f(n) = (logn)Ethen by the Borel-Cantelli


lemma we obtain

which, in turn, implies


252 CHAPTER 24

Then replacing n by ((0, N ) and recalling that

log N
E ( O , N , 2 (log log N)1+' a.s.

for all but finitely many N (cf. Theorem 20.4) we obtain the theorem.
In order to prove (24.2) of Theorem 24.1 we introduce a few notations
and we prove three lemmas. Let

1 if [ ( z , n )> 0,
I ( z , n )=
0 if [ ( z , n )= 0,

k = 2 , 3 , . . . we have

where u E Z 2 and v E Z2 are defined by

In order t o present the proof in an intelligible form we prove Lemma 24.5


first in the case k = 2. That is, we prove

LEMMA 24.6 For any 0 < q < 1 we have

Proof.

m z ( z , y ; n ) = P { l ( z , n ) = l , I ( y , n ) = l}
LARGE COVERED BALLS 253

n
= C P { I ( z , n )= l , I ( y , n ) = 1 I v, = k < vy}P{v, =k < VY}
k=O
n
+ x p { I ( ~ , n =) l , I ( y , n ) = 11 vy = k < vz}P{vy = k < vz}
k=O
n
= C P { I ( y , n ) = 1 1 v, = k < vy}P{., =k < vy}
k=O

+c n.

k=O
P { l ( z , n ) = 1 I vy = k < v,}Pvy =k < v,}

= cn

k=O
P{I(y - 2 , n - k ) = l}P{v, = k < vy}
n

+ CP{I(z- y , n - k ) = l}P{vy = k < vz}.


k=O

Consequently we have

which implies the upper part of Lemma 24.6.


We also have
qn
m z ( 2 , y ; n ) 2 C P { I ( y - z , n - k ) = l}P{v, = k < vy}
k=O
qn
+ CP(I(2- y , n - k) = l}P{vy =k < v,}
k=O
2 P{l(z - y,n - qn) = I}P{I(z,qn) = 1 or I ( y , qn) = I}
=ml(z-Y;n-qn)[ml(z;qn) + w ( y ; q n ) -ma(z,y;qn)l.

Hence we have Lemma 24.6.

Proof of Lemma 24.5. Let P k (resp. P k ( T ) ) be the set of permutations


of the integers 1 , 2 , . . . , k (resp. 1 , 2 , . . . ,r - l , r + 1 , . . . , k ) . Further, let
< v ( z ~<~. ). . < v ( z i k - , )< v(z~,)= j } .
A = A ( i l , i z , .. . , i k ; j ) = {~(zi,)
254 CHAPTER 24

Then we have

Consequently

= P { l ( ~ , n )= 1) C C P{A} = P {l( v ,n ) = 1)
~ =( il1 , . . . , i k - 1 ) W k ( T )
l < j < n , i k =T
k
C { l ( z j , n =) 1; j = 1 , . . . + 1 , . . . , k}
X P
{ T=l
,T - 1, T

=m1(v,n) M(n)+mk
[ (-(i)+ (3 --+l)k+l(;))]

k
= 1; j = 1 , . . . , T - 1,T + 1 , . . . ,

= m1(u,n - qn)[M(qn) (1 - k)mk]. +


Hence we have Lemma 24.5.
Let N = N ( n ) / 00 and k = k ( n ) 7 00 be sequences of positive
integers with N ( n ) < n1/3. Assume that there exists an E > 0 for which
LARGE COVERED BALLS 255

k(n) 5 (N(n))'. Then for any n = 1 , 2 , . . . there exists a sequence x1 =


21(72), 2 2 = 2 2 ( 7 2 ) , . . . , Xk = X k ( 1 2 ) E z 2 ( k = k(n)) such that

N - 15 Il~:ill5 N (i = 1 , 2 , . . . , k ) ,
N1-€ 5 5N
1 1 ~ i- ~ j l l (1 5 i < j 5 k).
Now we formulate our
LEMMA 24.7 For the above defined x1,x2,.. . , x k we have

(24.13)
Further, if

N(n) = exp((logn)1/210g, n ) and k(n) = exp((logn)1/2) (24.14)

then
5 exp(-(2-4E)lo&.n).
mk(xl7Z2,...,xk;n) (24.15)

Proof. By Lemma 24.5 and Theorem 20.3 we have


(l-?logN"
( 1 + 0 ( log, N1-' )))M(n)
log n log N1-&
mk 5
log n

(
< - 1 - -k210gN1-E
- k
log, N1-'
logn ( l + 0 ( log N1-' ) ) ) M ( n )

( 210gN1-E (1+0(log, N1-€ ) ) ) k M ( n ) < . . .


S e x p --
k logn log N1-€

L exp
(1 + 0 log, N1-€ (log k) . ))
log N1-&

Hence we have (24.13). If (24.14) holds then

exP
(-21°gN1-& (
(1 + 0 log, N1-" )) log k)
log n log nl-&

and we obtain (24.15).


Now we can present the
256 CHAPTER 24

Proof of (24.2). Let N ( n ) be defined by (24.14) and let

nj = [exp(eJ>l,
i q n ) = exp(2(logn)1/2log, n ) ,
1cj+1 = exp((lognj+1)1/2).

Then

P{R(nj+l) 1 N n J } 5 P{R(%+l) 2 N(%+l))


< mkj+,(Xl,X2,. . . x k j + l ; nj+l) 5 ( j + i)-(2-4E)
- J

where xi = zi(nj+l) (i = 1 , 2 , . . . , kj+l). Hence by the Borel-Cantelli


lemma
~ ( n j +5~fi(nj) ) a.s.
for all but finitely many j . Let nj 5 n < nj+l. Then
R(n) L R b j + l ) I q n j ) I N ( n )
which proves (24.2).
Now we turn to the proof of the upper inequality of (24.6). In fact we
prove a bit more:

THEOREM 24.4 ( R M s z, 1990/A). For any z > 0 and for any n =


2 , 3 , . . . we have

(24.16)

In order to prove Theorem 24.4 let

Then for any n = 1 , 2 , . . . there exists a sequence y1 = y1 ( n ) yz


, = yz(n), . ..,
y~ = y j ~ ( n such
) that

M - 1 5 l[yJl 5 M (i = 1 , 2 , .. . , K ) ,

M3l4 5 J J y-
i y.j/jl) 5 M (15 i < j 5 K ) .
if C* is small enough. Now we formulate our
LARGE COVERED BALLS 257

LEMMA 24.8

Proof. In the same way as we proved Lemma 24.7 we obtain

Hence we have Lemma 24.8.

Proof of Theorem 24.4. Clearly

P { R ( ~ )2 exp(C(logn)’/’)} 5 m K 5 exp
(-?)
which proves Theorem 24.4.
Instead of proving (24.3) we prove the following stronger
THEOREM 24.5 For any 0 < 0 < (7r/120)1/2,9/10 < 6’ < 1 and
E> 0 we have

where
0
h ( n ) = exp (1- &)-(lognlog, n)l/’
).
-
(
In order to prove Theorem 24.5 let p1(0
pl(z ?.t 0), p2(z
6
- -
x),p2(0 x), . . . resp.
0), . . . be the first, second, . . .waiting times t o reach x
from 0, resp. to reach 0 from z, i.e.

p 1 ( O u z) = inf{n : n _> 1, S, = z},


pl(z-
p2(O -+ z) = inf{n : n 2 p1(O -
X) pl(z + -
0) = inf{n : n 2 pl(0- x), S, = 0) - pl(0- x),
O), Sn = X}

pz(x - -(Pl(O~~)+Pl(~-o)),

- ( P I P - + z) f p 1 ( z -
+
0) = inf{n : n 2 p1 (0 cu) x) p1 (x 0) -
0)+p2(0-
+ pz (0
x)),.. .
2.1 z), S, = 0}
258 CHAPTER 24

Let ~ ( 0 z , n ) be the number of 0 -+ z excursions completed before


n, i.e.

T(O - 2,721 = max


{ i :C(pj(0
ji-l
=1
- z) +pj(z - 0)) + pi(0 - z)
1
5n .

In the proof of Theorem 24.5 the following lemma will be used.


LEMMA 24.9 For any 0 <0< (7r/120)1/2,9/10 < S2 < 1 and n big
enough we have

(24.17)

(For p n , see Notation 6.)


Proof. Let q = 1 - p = 1 - p ( 0 w z). Then applying Bernstein inequality
(Theorem 2.3) with E = Sp and Lemma 24.2 we obtain

Hence

(24.18)

which implies (24.17).


Proof of Theorem 24.5. (24.17) clearly implies that
LARGE COVERED BALLS 259

for any 0 < 0 < ( 7 ~ / 1 2 0 ) ~and


/ ~ 9/10 < d2 < 1.
Observe that Theorem 20.5 and (24.18) imply

n-'l2 inf
Ilzll<e@J;;
r (
02.t x,exp ((llo+g;F):n)) 2 1 - 6 i.0. a s . ,

which in turn implies Theorem 24.5.


Now, we have t o prove the lower inequality of (24.6). In fact instead of
proving the lower part of (24.6) we prove the following stronger
THEOREM 24.6 For any E > 0 and z > 0 there exists a positive integer
No = No(E,z)
such that

(24.19)
ifn 2 NO,0 < 0 < ( 7 ~ / 1 2 0 ) l /and
~ 9/10 < J2 < 1.

Proof. Theorem 20.2 and (24.18) imply that for any E > 0 and z > 0 there
exists a positive integer NO= No(&,z ) such that

and

if n 2 No. Consequently

2 P{ inf
Ilzll<e@Jii
r(0 2.t x,pn) >_ (1 - 6 ) n 1 / ~->P (Pn 2exp (%))
2 1 - E - (1 - exp(-nz)) -E = exp(-nz) - 2~

if n 2 N O .
Hence we have Theorem 24.6.
Instead of proving (24.4) we prove the stronger
260 C H A P T E R 24

THEOREM 24.7
(log n)1/2
exp ( log,n ) E LUC(R(n)).

Proof. Let R(n,N ) be the largest integer p for which the disc

&(p,n) = 1
. E z2, ISTI - 4 I P I
is completely covered by the path {&, & + I , . . . ,S N } i.e. for each z E
Q ( p , n ) there exists a k E [n,N ] such that <(x,N ) - [(x,n ) > 0.

Consequently

(2420)

Let

and

Apply (24.20) with n = ?zk , N = ? & + I .Then we get

Clearly R(nk,nk+l) is a sequence of independent r.v.'s. Hence


LARGE COVERED BALLS 261

if there were a ball around the origin having radius

completely covered a t time nk+1 then there were a ball around S,, of radius

(log(nk+l - n k ) ) 1 / 2
(’- log, nk

completely covered at time n k + 1 . Hence Theorem 24.7 is proved.


Theorem 24.3 clearly implies that for any fixed x E iZ2

E ( x , n)
lim - = 1 a.s. (24.21)
n+cc I(0,n)

It is worthwhile to note that for fixed x in (24.21) a rate of convergence


can also be obtained. In fact we have

THEOREM 24.8

where u ( x ) is a positive constant depending only on x.

Remark 3. Theorem 24.8 and Theorem 20.4 combined imply

lim
n-+m
( log n
(log log n)l+E
)’” 1- - 11 = 0 a.s.

Before presenting the proof of Theorem 24.8 introduce the following


notations: let Z1, Ear... be the local time of the walk at 0 during the first,
second, . . . 0 -A x excursions, i.e.
-
=1(x) = El = J(O,Pl(O -A x)),
I -
= 2 ( x ) = 5 2 = I(O,Pl(O-A x) +Pl(X-- 0) + p 2 ( 0 - x)) -El,
%(x) = % = <(O, R3) - (El + E 2 ) ,
... ..

where
262 CHAPTER 24

Observe that E1,22,. . . are i.i.d.r.v.'s with distribution


P(2.1 = k} = P(((0, p1(0 * z)) = k }
= (1 - p ( 0 * z))"-lp(O * z) ( k = 1 , 2 , . . .).
Consequently
EZ1 = (p(0* z ) ) - ~ ,
E(21 - ('(0 * z))-')~ = (p(0* ~ ) ) - ~ -( 1p ( 0 * x))
and
- - (HI + . .. + H n ) -- p / 2 (1- p ( 0 * z))1/2
lim sup
n+Oo (2n log log n)1/2 P ( 0 * XI
where
H1 =((z,P1(0*zC)+P1(2*O)),
Hz = E(z,pi(O* x) + p i ( x * 0) + p 2 ( 0 * z) + ~ 2 ( z *0)) - Hi,
... .. . .. .
Since
-+
2 1 52 + . . . + %(o-u.,n) 5 ((0, n ) 5 z1 + z2 + . . . + 5 T ( O ? - * z , n ) + l
and the sequence ~ ( *
0 2,n ) takes every positive integer we have

By the law of large numbers

Hence

and we have Theorem 24.8.


Remark 4. Theorem 13.3 claimed that the favourite values of a random
walk in Z1 converge to infinity. It is natural to ask the analogue question in
higher dimension. In the case d 2 3 the P6lya Recurrence Theorem implies
that the favourite values are also going to infinity. Comparing Theorem
20.4 and Theorem 20.7 (in case d = 2) we find that in Z2 the favourite
values are also going to infinity. Theorem 24.3 also says that the rate of
convergence is not very slow. In fact we get
LARGE COVERED BALLS 263

THEOREM 24.9 Let d = 2 and consider a sequence { z n } for which


J(z,, n ) = J ( n ) . Then for any E > 0 we have

Remark 5 . Replacing the random walk in Theorems 24.1 and 24.2 by a


Wiener sausage B,(T) one can ask whether these theorems remain valid.
A harder question is to study the case where r = r T 4 0. In fact Theorem
18.4 implies that if r T 5 T-(lOgT)E(with some E > 0) then the analogues
of Theorems 24.1 and 24.2 cannot be true anymore.

24.2 Completely covered disc in Z2 with


arbitrary centre
The results of Section 24.1 suggest the question whether the largest com-
pletely covered disc is located around the origin or somewhere else. Investi-
gating this question it turns out that the largest covered disc is much-much
larger than the one around the origin.
Formally speaking, let u E Z2 and define

Let r(n) be the largest integer for which there exists a random vector u =
u(n) E Z2 such that Q(u,r(n))is covered by the random walk in time n ,
that is,
((z, n ) 2 1 for every z E Q(u,r(n)).
Then we formulate the following theorem.
THEOREM 24.10 (Rkvksz, 1993/A, 1993/B) We have
n1I5O 5 r ( n ) 5 a.s.

for all but finitely many n.


This Theorem suggests the following:
Conjecture 1. There exists a 1/50 5 qo 5 0.42 such that
log r(n) = qo
lim
n+cc
- logn
a.s.

Zhan Shi conjectured qo = 1/4. This conjecture was nearly proved by


Dembo-Peres-Rosen-Zeitouni (2005/A). In fact they proved:
264 C H A P T E R 24

THEOREM 24.11
logr(n) - 1
lim -- in probability.
logn
~

n+oo 4

24.3 Almost covered discs centred


in the origin of Z2
The results of Section 24.1 claimed that the radius of the largest covered disc
is about exp((logn)1/2). Now we are interested in the relative frequency of
the visited points in a larger disc.
In order to formulate our results, introduce the following notations:
1 if E(x,n) > 0,
I ( x , n )=
O if [ ( x , n )= 0,

K ( N , n )= -
1
c
N2n x€Q(N)
I(x,n);

i.e. K ( N ,n ) is the density (relative frequency) of the points of Q ( N )covered


by the random walk { S k , 0 5 Ic 5 n}.
Our first theorem claims that if we consider the disc around the origin of
radius exp((1ogn)") ( a < 1) or even of radius exp(logn(loglogn)-2-E) ( E >
0) then the density of the covered points converges to one 1 a.s. In fact we
have
THEOREM 24.12 (Auer-Rkvksz, 1990). For any E > 0

lim K (exp
n+cc
( logn
(log logn)2+E
)
, n ) = 1 a.s.

Proof. Consider

where
N = Nn = exp

Then by Lemma 24.2 we have


LARGE COVERED BALLS 265

provided that ((XI( 5 N . Hence


1
- W , P,)) = -
~ ( 1K C ~ ( 1 I-( Z ,P,)) I exp(-c(log n)l+E),
N2r x E Q ( N )

and by the Markov inequality for any S >0


P{I - K ( N , p , ) 2 6) 5 8-' exp(-C(logn)'+')

which, in turn, by Borel-Cantelli lemma implies that


lim (1 - K ( N , p , ) ) = 0 a s . (24.22)
n+co

Let m = m, = [exp(n(logn)l+")].Then by Theorem 20.5 m, 2 pn a s .


for all but finitely many n. Hence (24.22) implies
lim (I - K ( N , m ) )= 0 a s .
,-+Do

Observe that given the choice of rn and N we have

log m )>N>exp( logm )


(log l o g ~ t ) ~ + ~ '
and we obtain

lim
n+cc K (ex. ((log logrn
log
) ,m)
m)2+e
= 1 as.

Consequently we also have

This proves the theorem.

24.4 Discs covered with positive density in Z2


Theorem 24.12 tells us that almost all points of the disc Q(exp((1ogn)"))
(1/2 < a < 1) will be visited by the random walk {So,5'1,. . . ,S,}. At the
same time by Theorem 24.1 we know that some points of Q(exp((1ogn)"))
will surely not be visited. We can ask how many points of Q(exp((1ogn)"))
will not be visited, i.e. what is the rate of convergence in Theorem 24.12?
However, it is more interesting to investigate the geometrical properties of
the non-visited points. For example: what is the area of the largest non-
visited disc within Q(exp((logn)"))? By non-visited disc we mean a disc
266 C H A P T E R 24

having only non-visited points. The following theorem claims that with
probability 1 there exists a non-visited disc of radius exp((log n)P) within
the disc Q(exp((1ogn)O)) for every p < a provided that a > 1/2.
Let
Q(u,T) = {x E Z2, 112 - ~ 1 51 T}.
Then we have

THEOREM 24.13 (Auer-Revesz, 1990). Let

1 / 2 < a < 1 and p <a.


Then there exists a sequence of random vectors u1, u2, . . . such that

and
I ( x , n ) = 0 for all x E Q(un,exp((logn)P)).

In order t o prove Theorem 24.13, first we introduce a notation and present


two lemmas.
Let N > 0 and ul,uZ,. . . ,uk E Z 2 (k = 1 , 2 , . . .) be such points for
which the discs
Qi = & ( u ~ , N )(i = 1 , 2 , . . . , k)
are disjoint. Denote by

mk(Qi, Qz, . . . ,Q k ; n)
= P{Vi = 1 , 2 , . . . ,k, 3yi E Qi such that I ( y i , n ) = 1)

the probability that the discs Q1, Qz, . . . , Qk are visited during the first n
steps.

LEMMA 24.10 Let

exp((logn)a) 5 llull < n1/3, N 5 exp((1ogn)O)


and
o < p < a < 1.
Then
m l ( Q ( ~ , N ) ; n5) 1- C(logn)a-l
f o r a suitable constant C > 0.
LARGE COVERED BALLS 267

Proof. It is easy to see that

Hence by Theorem 20.3

Hence Lemma 24.10 is proved.

LEMMA 24.11 Let

(i) O < P < a < l ,

(ii) N = exp((logn)P),

(iii) 211,212,. . . , '1Lk E Z2 be a sequence f o r which

Proof. Clearly

mk(Q1, Q2,. . . ,Q k ; n )

=P {u k

i=l
{YQj are visited before n and Qi is the last visited disc}

5P u
{ i=l
k
{the discs Q1, . . . , Qi-1, Qi+l, . . . ,Qk are visited before n }

x max max P{Qj is visited before 2n I S, = x}.


Z#J x€Qi
268 C H A P TE R 24

Hence by Lemma 24.10

m k ( Q i , .. . , Q k ; n )

x (1 - C(l0gn)a-1)

and

i=l /
I 1 + ( k - 1)(1- C(logn)"-l)
Since
1-a 1
1 + ( k - 1)(1- a ) - k E>
< -' ( 1 - - 5-exp
k

by induction

and we have Lemma 24.11.


Remark 1. Lemma 24.11 is a natural analogue of Lemmas 24.5 and 24.7.

Proof of Theorem 24.13. Let

1/2<cr<Cl+E<l, /3<a
and
N = exp((1ogn)P).
Then there exist k = k ( n ) = exp((1ogn)") points u ~ , u z .,.., U k such that

5 exp((Iogn)"+')
(Iui(( (i = 1 , 2 , . . . , k ) ,
LARGE COVERED BALLS 269

and
11xi - xjll 2 exp((logn)a+E/2) ( i , j = 1 , 2 , .. . , /c; i +j).
Then by Lemma 24.11

where Qi = Q ( u i , N ) . Choosing n = nj = e j the Borel-Cantelli lemma


implies that with probability 1 at least one Qi (i = 1 , 2 , . . . , Ic(nj))is not
visited till nj for all but finitely many j.
Let nj 5 n < nj+l. Then with probability 1 there exists a u with llzlll 5
exp((j+l)"+") such that the disc Q ( u , N ) ( N = Nj = exp((lognj)P)) is not
visited before n if j is large enough. Consequently for all but finitely many
n there exists a uo = uo(n,w) E Q(exp((logn)a+2E))such that Q(u0,N ) C
Q ( e ~ p ( ( l o g n ) ~ + is
~& not
) ) visited before n. This proves Theorem 24.13.
Now we consider the density K ( N , n ) for even larger N . The case
N = nQ will be investigated and for any 0 < a < 1/2 we prove that
K ( n a ,n ) has a limit distribution. In fact we have

THEOREM 24.14 (Rkvksz, 1993/B). For any 0 < a < 1 / 2 we have


< x)
lim ~ { ~ ( [ n ~ ] , =n I)- (1 - x)20/(1-2cu) (0 I x I 1).
n+oo

At first we present a few lemmas.

LEMMA 24.12 Let

Then
P{l(x,n) = l} = 1 - 2 a + 0 (____

Proof. It is a trivial consequence of Theorem 20.3.

LEMMA 24.13 Let x and y be two points of iL2 such that

Then
(1- 2 4 2
lim rna(z,y;n) = (24.23)
n+co l-a
270 C H A P T E R 24

Proof. Lemmas 24.6 and 24.12 imply

-
-
2 (1 - 2 a +0 (e
2 - 2 , + 0 ( - - - ) log log n
2

log n

- (1 - 2a)2
1-a
+O (-). (24.24)

Similarly
m2(5,Y;n )
>m1(z - y ; n - q n ) [ m 1 ( x ; q n ) +m1(y;qn) -m:!(x,y;w)l
log log n log log n
log n log n
Consequently

(24.25)

(24.24) and (24.25) together imply (24.23).


The next lemma is an extension of Lemma 24.13.
LEMMA 24.14 Let z 1 , z 2 , .. . ,zk (k = 1 , 2 , . . .) be a sequence in Z2 such
that
na(logn)-P _< J J z- i zjJ1,JJziJJ _< Cna
>
where 0 < a < 112, 0 0 , C 2 1, 1 5 i < j 5 k. Then

Proof. Lemmas 24.5 and 24.12 imply


1-2,+0(-) log log n
log n
mk 5

l - 2 a + o ( - ) log log n
log n
- -M(n).
LARGE COVERED BALLS 271

By induction we obtain

r n k 5 (1-2a+0(!!3$q)k

~b(1-
j=2
(1-f) (2a-O(%)))-'. (24.26)

Similarly

mk 2
(1 - 2 a +0 (e
) M(qn)

1+ (

Consequently

l-2a+o(-) log log n


log n
mk 2
1- (1-i) ( 2 a - 0 ( T ) )

and by induction we obtain Lemma 24.14.

Proof of Theorem 24.14. Let A ( s ,n ) be the set of all possible s-tuples


( X I ,5 2 , . . . , z,) of &(no)with the property

Then

and by Lemma 24.14 we obtain

lim E ( ( K ( n * , n ) ) "=) (1 - 2a)'fi


n+cC
j=1
(1 - (1 - :) 2 ~ ) ~ '
3
272 CHAPTER 24

Consequently we have Theorem 24.14 with a distribution G,(.) satisfying

t) 20)
-1
L 1 x s d G a ( x )= (1 - 2 0 ) ‘ f i (1 - (1 -3
j=1

(24.27)

It is easy to see that the only distribution function which satisfies the
moment condition (24.27) is

G,(z) = 1 - (1 - z)2a/(1--2u).

Hence Theorem 24.14 is proved.

24.5 Completely covered balls in Zd


Theorem 24.1 describes the area of the largest disc around the origin of 2’
covered by the random walk {Sk,k 5 n } . In Z d (d 2 3) the analogous prob-
lem is clearly meaningless since the largest covered ball around the origin
is finite with probability 1. However, one can investigate in any dimension
the radius of the largest ball (not surely around the origin) covered by the
random walk in time n. Formally speaking let

Q(u,N ) = {z : x E Zd, ((z- u ( (5 N }

and R*( n )= R*( n ,d) be the largest integer for which there exists a random
vector u = u(n)E Z d such that Q ( u ,R*(n))is covered by the random walk
at time n, i.e.
t(x,n ) 2 1 for any x E Q ( u ,R * ( n ) ) .
Then we formulate our
T H E O R E M 24.15 (Rkvksz, 1990/B, 1993/B, ErdBs-R&v&z, 1991) Let
d _> 3. Then
10gR*(n) - 1
lim -- a.s.
r ~ - - t o ~loglogn d-2
Before the proof we present a few lemmas.

L E M M A 24.15 For any 0 < a < 1 and L > 0 there exists a sequence
, of the points of Z d such that
~ 1 ~ x 2. ,. .XT

L5 IlXill < L f l (i = 1 , 2,”., T ) ,


LARGE COVERED BALLS 2 73

[[xi- xjII 2 La ( i , j = 1,2,...,T; i # j ) ,


T = KL(l-*)(d-l)
where K = K ( d ) is a positive constant depending on d only.

Proof is trivial.

LEMMA 24.16 Let

where

and define T = T k and xl,x2,.. . , XT as in L e m m a 24.15. T h e n for a n y L


big enough we have

P(Q(0, L ) is covered eventually}


5 P { x i , x2,.. . ,XT are covered eventually} 5 e-(*-').

Proof. Define the sequence ,


a1, a 2 , . . . a k (k = 1 , 2 , . . .) by
Dk+l - Dk+l-i
Qi = Dk+l - 1 (i = 2 , 3 , . . . , Ic).

Assuming that
Dk-Dk-1 Dk-1
< < Dk+l - 1 < Dk+l - 1'
we have
0 < a1 < a2 < . . . < a k < 1 ( k = 1 , 2 , - ..)
and
(CYi+l - a 1 ) ( d - 1) < a i ( d - 2) (i = 1 , 2 , . . . , k )
where Q k + l = 1.
Let q l .,. . ,xiT be an arbitrary permutation of the sequence X I ,. . . , XT.
Consider the consecutive distances

11xi2 - xil 11, IIxi3 - x i z 11, . . . , I I X i , - xiT-l 11.


Assume that among these distances l 1 (resp. 22, resp. , . . . , l k ) are lying
between La' and La2 (resp. L f f 2and La3, resp. , . . . , L a k )and 2L.
2 74 CHAPTER 24

Then (by Lemma 17.8) the probability that the random walk visits the
points xi,,xi,, . . . ,xiT in this given order is less than or equal to

Lal(d-2)

Taking into consideration that the number of those j's ( 1 5 j 5 T ) for


which La' 5 llzj - x,11 < La'+' (where s is a fixed element of the sequence
( 1 , 2 , . . . '7')) is less than or equal to
KL(a;+l-al)(d-1) (2 = 1 , 2 , . . . , 5)'

we get

if L is big enough and we have Lemma 24.16.


In the same way as we proved Lemma 24.16 we can prove the following:
LEMMA 24.17 For any L big enough and u E Z d we have
P{Q(u, L ) is covered eventually} 5 C * L d e - ( T - l )

Dk - 1
<' < Dk+l - 1'
D=-d - 1
d-2'
k is an arbitrary positive integer and C' = C,t a's a positive constant.

Proof of the upper part of Theorem 24.15. Let L = [ ( l ~ g n )with


~]

1
8=8k= +E=- - &) - l + E.
( d - 1 ) ( 1 - 0)
Then T 2 (1ogn)Qwith some $ > 1 and we obtain our statement observing
that
1
lim 6k =
~c+m ( d - 2 ) - ~ ( -d1)
E. +
LARGE COVERED BALLS 275

Proof of the lower part of Theorem 24.15.


At first we prove a few lemmas.
LEMMA 24.18 There exasts a constant K > 0 such that

af n 2 KR2 where c d as the constant of Lemma 17.8.

Proof is essentially the same as that of Lemma 17.8.


Let
L = L(n) = [(logn)('-2)/(d--l)]
and define
71 = 71 + [(1ogn)2/(d-1)],
$1 = inf{k : k > 7 1 , S, = s k } - 71,
72 = 71 + $1 + [ ( l ~ g n ) ~ / ( ' - l ) ] ,
$2 = inf{k : k > 7 2 , S, = s k } - 72,. ..
Clearly, with a positive probability (depending on n) $1 is infinite. How-
ever, we have
LEMMA 24.19 For any 6 > 0 there exists a constant M = M ( 6 ) > 0
such that
max $i 5 M(logn)2/(d-2))2 n-&. (24.28)
p{l<i<L

Proof. Clearly for any N > 0 there exists a constant 0 < p = p ( N) <1
such that
~{lls,,- S,I~ I N(logn)11(d-2)) 2 p > 0.
Observe that by Lemma 24.18 we have
P{$h I M(logn)W - 2 ) )
2 P { $ ~L M(logn)2/(d-2)1 JIS,, - S,IJ 5 ~ ( l o g n ) l l ( ~ - ~ ) )
x P{llS,, - S,l 5 N(logn)1/(d-2)}

provided that M > KN2 where c d and K are the constants of Lemma
24.18. Since $1, $2, . . . are i.i.d.r.v. 's we get
276 CHAPTER 24

which implies (24.28).


Let
A , = A(n) = { max
l<i<L
$i 5 M(logn)2/(d-2)}
and x be an arbitrary element of Z d for which

(24.29)
Then applying again Lemma 24.18 we have

c d
=I-
2(10g n)l-Wd-2)

Hence the conditional probability (given A(n))that x is not covered is less


than or equal to

Consequently the conditional probability that there exists a point for which
(24.29) is satisfied and which is not covered is less than or equal to

Let (Y > 1 then by Lemma 24.19

for any E > 0 where T ( n )= n"-1(logn)-(2/(d-2)+1-E).


Consequently with probability 1 for all but finitely many k there exists
an n between 2k and 2k+1 for which An holds. Given this n the conditional
probability that Rd(n) _< (logn) 1/(d-2)-2Eis less than or equal to

Hence among these n's there are only finitely many for which &(n) 5
(logn)1/(d-2)-2E,i.e. between 2k and 2k+1 there exists an n (if k is large
enough) for which Rd(n) > (logn)1/(d-2)-2E. This implies the lower part
of Theorem 24.15.
LARGE COVERED BALLS 277

Let Vn(d) be the number of steps after the n-th step until the random
walk {Sn>on Z d visits a previously unvisited site. Clearly V,(d) = 1 i.0.
8.5. At the same time Theorem 24.11 suggests that

This conjecture was proved by Dembo et al. (2005/A).


Theorem 24.15 suggests the following

Conjecture 1.

Theorem 24.15 tells us that the path of the random walk in its first n
steps covers relatively big balls. It is natural to ask where these big, covered
balls are located in Z d . For example we might ask how close they can be to
the origin. In fact we want to prove that if Q ( u ,r,) (r, = (logn)1/(d-2)-E)
is covered at time n then u is big.
THEOREM 24.16 A s s u m e that Q(u,r,) is covered at t i m e n. Then
llull 2 exp(logn)1/2.

Proof. By Theorem 24.15 the radius of the largest covered ball at time
exp(3(log n)lI2)is smaller than
< (Iogn)1/(d-2)--E
(3(lOgn)1/2)l/(d--2)+~

By Theorem 19.5 S k !$ Q ( 0 ,exp((1og .)'I2)) if k 2 exp(3(10gn)'/~).Hence


if IluII 5 exp((logn)1/2) then after the time exp(3(10gn)'/~)the random
walk cannot visit the ball Q(u,r,) at all, before this time it cannot cover
the ball.
It is easy to get a much better result than the above one. However, to
get the best possible result seems to be hard.

Question. Let Q ( u ( n ) , R * ( nbe


) ) the largest covered ball and let x, E Z d
be a favourite value, i.e. ((xn,n) = ( ( n ) .Is it true that
xn E &(u(n),R*(n))i.0. as.?

24.6 Large empty balls


The previous Sections of this Chapter gave a description of the size of the
covered or nearly covered balls. This Section is devoted to study the size
of the large empty balls (left empty by a Wiener process).
278 CHAPTER 24

Let W ( t )E Rd (t 2 0, d 2 3) be a Wiener process. We say that the


ball
Q(z,T) = {Y : Y E R d , IIY - $11 I
is left empty by W ( . )forever if

D ( 2 , r ) := Q ( z , T ) n { W ( t ) ,t 2 0) = 0.

Let

be the radius of the largest empty ball in Q(0 ,R). We are interested in
studying the properties of the process { p ( R ) ,R 2 0). Since W ( 0 )= 0,
clearly
R
P(R) I 2' (24.30)

First we give a sharper upper bound than the trivial one of (24.30).
The next four theorems are due to ErdBs-R6vbz (1997).

THEOREM 24.17 For any E > 0


R
p(R) 5 - - R1-€ a.s. (24.31)
2
if R is big enough.
Our next theorem tells us that the upper bound of (24.31) is not very
far from the best possible result.

THEOREM 24.18

(24.32)

Theorem 24.18 tells us that for some R the p(R) will be very big. The
next theorem tells that for some R the p(R) will be much smaller.

THEOREM 24.19 For any E > 0 we have


R
P(R) 5 i.0. a.s. (24.33)
(log log R)lld-€

Now we show that the upper bound of (24.33) is close to the best possible
result.
LARGE COVERED BALLS 2 79

THEOREM 24.20 For any E > 0

(24.34)

if R is big enough and d 2 4. Further


p(R) 2 R(logR)-(l+") as.
if R is big enough and d = 3.
The proof of Theorem 24.17 is based on the following simple Theorem
24.21. In order to formulate it we introduce a few notations.
For any z E Rd with JJzIJ= 1 and 0 < 6 < 1 define the cone K ( z , 6 ) as
follows:
K(x,6)= y : { Y E Rd, (+) w } .
Clearly for any 0 < 6 < 1 there exists a positive integer K = K ( 6 ) and a
sequence xl,x2,. . . , X K such that
xi E Rd, llXill = 1, (i = 1 , 2 , .. . , K )
K
(JK(Xt.i,$) = Rd, K 5 L(1 - (1 - 29)2)+1)/2
i=l
where L is an absolute positive constant.
Let
Ci = Ci(R)= Ci(R,~,29)
= ( 1 ~: ?/ E K ( ~ i , 2 9 )R, E 5 ( Y , z ~5
) R1-')
where
i = 1 , 2 ,..., K , O<~<1/2, R>O.
Now we have
THEOREM 24.21 For any 0 < E < 1/2, 1/2 < 6 < 1,
f K

Note that Theorem 24.21 tells us that for any r big enough W ( t )meets
all frustum of cones & ( R ) (i = 1 , 2 , .. . , K ) .
In case d = 3 a much stronger Theorem was proved by Adelman-
Burdzy-Pemantle (1998). Let f be a strictly positive increasing function
on R+ and let Cf be the thorn
{(z,y,z) E R3 : x 2 + y2 + .z2 2 1 and (x2 +y2)l/' 5 f(lz1)).
Say that the Wiener process W ( . )avoids f-thorns if there is with probability
1 a random set congruent to C f avoided by W .
280 C H A P TE R 24

THEOREM 24.22 If f ( z ) = zexp(-c(logz)1/2) for c > 0 suficiently


small, then W ( . ) does not avoid f-thorns.

24.7 Summary of Chapter 24


In this Section we summarize the most important results of this Chapter
in an inaccurate form.
Let
Q(u,T) = {x E Z d , 1
12- ull 5 T ) ,
c
zEQ(u,r)
I(t(z,n))
K ( u ,T , n ) =
TdWd

where
0 if z = 0,
I ( z )=
1 if z>O
and w d is the volume of the unit ball of Rd
Let d = 2. Then
(9
~ ( ~ , r ,=n1) if T 5 r:) := exp((logn)l/’)
i.e. Q ( 0 , r ) is completely covered if T 5 T ? ) (the exact results are
(24.2-3*), Theorem 24.7, (24.5), (24.6*)),
(ii)

i.e. Q ( 0 ,T ) is “almost” covered if T 5 r t ) (the exact result is Theorem


24.12),

(iii) K(O,r,n) has a nondegenerated limit distribution if r 5 T:) :=


na (0< a < 1/2) (the exact result is Theorem 24.14),
(i.1
sup ~ ( u , r , n=)1 if T 5 r;) := n1/4
UEZZ

(the exact result is Theorem 24.10).


Let d 2 3. Then

(the exact result is Theorem 24.15).


Chapter 25

Long Excursions

25.1 Long excursions in Z2


In Section 12.1 we have seen that the length of the longest excursion up to
n in Z 1 for some n can be nearly as big as n (cf. Theorem 12.1). At the
same time for some other n it will be about nlloglogn only but it cannot
be smaller than this (cf. Theorem 12.3). We also studied the length of the
second (third, fourth,...) longest excursion and we have seen that if the
length of the longest excursion is n/ log log n only then the length of the
second, third,... longest excursion is about the same (cf. Theorem 12.4).
As a consequence of this we obtained that the sum of the length of the
loglogn longest excursions is nearly n (cf. Theorem 12.5).
Now we intend to investigate the analogue questions in Z2.
Let p l , p2,, . . be the consecutive return times of the planar random walk
to the origin (cf. Notation 6 ) . Put T k = P k - P k - 1 (the length of the Ic-th
excursion). Now let @(n)be the last return to the origin before time n i.e.

Denote by
Mp 2 M p 2 .. . 2 M p > n ) + l )
the order statistics of the sequence

7 1 72 , . . . > T<(o,n)n - @I(.).


)
(25.1)

THEOREM 25.1 (Csaki-RWsz-Rosen, 1998).

M t l ) + Mi2)
lim =1 as.

Proof. Let
n
1 logn
k=O k=l

281
282 CHAPTER 25

(cf. Lemma 17.2). Define nj by g(nj) = j . Then clearly

O l j - g +
(7') <Clogj

Thus rc*(nj)L: 1 a.s. for all but finitely many j . Now take nj 5 n 5 nj+l.
Since
"n) I "j+l)
and
nj(S(nj))-2 I n ( g ( n ) ) - 2< I nj+1
we obtain that
.(n) 5 K*(nj) 5 1.
Since <(O,n) 5 N ( n ) (cf. Theorem 20.4) there are no more than N ( n )
excursions before time n. Thus the sum of those elements of the sequence
(25.1) which are no larger than n(g(n))-zis bounded by

Hence the fact that ~ ( n5)1 implies the Theorem.

Remark 1. As we told in case d = 1 the number of excursions required to


nearly cover the time interval [0, n]is between 1 and loglogn (cf. Theorems
12.1 and 12.5). In case d = 2 Theorem 25.1 tells us that the same number
is 1 or 2. However Theorem 25.1 does not imply straight that for some n
one excursion is enough to nearly cover the time interval [0, n ] . This follows
from:
LONG EXCURSIONS 283

THEOREM 25.2 CsaK-Revesz-Rosen, 1998). Let

X = {h(x) = h ( m l , r n 2 , 7 ; ~: ) O 5 ml < 7712 5 1, O 5 T < I}.


Then the set of the limit points of the sequence

as equal to 31 a.s.

Theorem 25.1 easily implies the following

Consequence 1. pn+l - pn is either much smaller or much bigger than


Pn.
In order to give a more accurate form of Consequence 1, let {a,} and
{Bn} be sequences of positive numbers with

Further let

, bn = exp (i) ((1+ )


n
an = exp an (z) , Cn = exp E)Pn
(E > 0).

Finally let
T,, = Pn+l - P n
Pn
Then we have

THEOREM 25.3 (Csaki-Ri.vi.sz-Shi, 2001/B).

6) Tn 4 (a;', a n ) a.s. for all but f i n i t e l y m a n y n,


(ii) Tn E (b;',~;') i.0. a.s.
(iii) Tn E (cn,bn) 2.0. a.s.
284 CHAPTER 25

Example 1.

i.e.

satisfy the above conditions.


The next two Theorems give the limit distribution of T,.
THEOREM 25.4 (Revesz, 2000).

uniformly for o < z < n317.


Theorem 25.4 tells us that T,, with a big probability, is very small.
However, Theorem 25.3 claims that T, occasionally is very large. In our
next theorem the limit distribution of T, is evaluated when T, is large.
THEOREM 25.5 (Revksz, 2000). For any 0 < E < 1/7 we have
03

{
P T, > exp (?)
Z
I T, > l} = 7r2
uz
-e--Rudu
U+Z
+

uniformly for o 5 z < 72317.

25.2 Long excursions in high dimension


Since in Z d ( d 2 3) a random walk returns to the origin only finitely many
times, Theorems 25.1, 25.2 and 25.3 cannot hold true in higher dimension.
However if we consider the longest excursion away from some 2 E Z d com-
pleted by the time n , then it can be long. For i 2 0, define the random
variable x(i) by

Si+j # Si, j = 1 , 2 , . .. , ~ ( i-) 1, Si+x(i) = Si.


(If such x(i) does not exist, we set x(i) := m.) Let

R(n) := max{x(i) : i + x(i) 5 n}


which in words denotes the length of the longest completed excursion (away
from any point) a t time n.
LONG EXCURSIONS 285

THEOREM 25.6 (CsAki-Revksz-Shi, 2001/B) Let d 2 3. With probabil-


ity one,

On the proof, here we only mention that its main ingredient is Theorem
23.1.
This page intentionally left blank
Chapter 26
Speed of Escape
Theorem 19.5 on the rate of escape suggests that the sphere {z : 11z11 = R }
is crossed about R times by the random walk if R is big enough and d 2 3.
In fact if \IS,(( = @ for every n then CzEZ(R) ((x,CG) = O ( R ) where

Z ( R ) = {Z : z E Zd, IIIzlI - R) 5 1).


Introduce the following notations:

0 if ((z,n ) = 0 for every n = O,1,2,. . . ,


J(z) =
1 otherwise
and
m)= c Jb),
zEZ(R)

i.e. J ( z ) = 1 if 2 E Z d is visited by the random walk and 8(R) is the


number of points of Z(R) visited eventually by the random walk. On the
behaviour of 8(R)we have the following:

Conjecture 1. For any d 2 3 there exists a distribution function H ( z ) =


H d ( z ) for which H ( 0 ) = 0 and

Unfortunately we cannot settle this conjecture but the analogous ques-


tion for a Wiener process W ( t )= {Wl(t),Wz(t), . . . , Wd(t)}(d 2 3) can
be solved. In order t o present the corresponding theorem we introduce the
following
Definition. W ( t )is crossing the sphere {z : llxll = R } 0 = 0 ( R ) times
if B(R) is the largest integer for which there exists a random sequence
0 < ( ~ 1= ai ( R ) < pi = 81( R ) < a2 = a2(R) < /32 = /32(R).. . < 00 =
as(R) < ,be = be(R) < 00 such that

287
288 CHAPTER 26

(8(R))-' will be called the speed of escape in R.

THEOREM 26.1

The proof is based on the following:

LEMMA 26.1

where
x = p ( R ) (1 - (&) d-2) 7

and

where

B ( R ,t )
=(inf{s : s > t , IlW(s)ll = R - 1) > inf{s : s > t , IIW(s)ll = R + l}.

R e m a r k 1. Note that the last formula for p ( R ) comes from Lemma 18.1.
By (18.4)

Proof. Clearly we have


SPEED O F ESCAPE 289

P{B(R) = 2 )

+ d R ) ( RL +) 1d - 2 P ( R )(1 - ( j & ) d - 2 )

where q ( R ) = 1 - p ( R ) . Similarly

P{B(T) = k }

x (1 - (&) "')
Hence we have Lemma 26.1.
Observe that
1 1 1 R+l
EB(R) = - = -
N

P(R)

Since p ( R ) -+ 1 / 2 ( R -+ ca) we have

EB(R) - 2
lim -- - (26.1)
R-+w fi d-2'
Lemma 26.1 together with (26.1) easily implies Theorem 26.1.
Studying the properties of the process {B(R),R > O} the following
question naturally arises: does a sequence 0 < R1 < Rz < . . . exist for
which
lim R, = 00 and B(Ri) = 1 i = 1 , 2 , . . .?
n+w
The answer to this question is affirmative. In fact we prove a much stronger
theorem. In order to formulate this theorem we introduce the following
290 CHAPTER 26

Definition. Let $ ( R )be the largest integer for which there exists a positive
integer u = u(R) 5 R such that

B(k) = 1 for any u 5 k 5 u + $ ( R ) .


It is natural to say that the speed of escape in the interval ( u ,u $ ( R ) )is +
maximal.
THEOREM 26.2
log log R
$ ( R )L log2 i.0.a.s.

Proof. Let
log log R
f ( R )= log2 7

and
1 d-21
+
P { A ( R ) A ( R S ) }= --
loglogR logR 2R
log(R + S ) -
log 2
if loglogRllog2 < S = o(R). In the case S 2 O(R)the events A ( R ) and
+
A ( R S ) are asymptotically independent. Hence

and for any E > 0 if n is big enough we have

2 2
R = l S=[log log R / log 21
P { A ( R ) A ( R + S )5
} (1)
d-2
2
(loglogn)2(1+ E )
SPEED OF ESCAPE 291

which implies Theorem 26.2 by Borel-Cantelli lemma.

Conjecture 2.
lim +(R) - -- 1
as.
R-+m log log R log 2
Theorem 26.2 clearly implies that B(R) = 1 i.0. a s . It is natural to ask:
how big can B(R) be? An answer to this question is
THEOREM 26.3 For any E > 0 we have
B(R) 5 2(d - 2)-'(1+ E)Rlog R a.s.

if R is big enough and

B(R) 2 (1 - E)RlogloglogR i.0. a.s.

Since this result is far from the best possible one and the proof is trivial
we omit it.

-
Remark 2. Conjecture 1 suggests that 8(R) R. Instead of investigating
the path up t o co consider it only up to p1 = min{k : k > 0, Sk = O}. Tak-
ing into account that P(p1 = GO} > 0 if d 2 3 we obtain C2EZ(R) [(z,p 1 )
R with positive probability. Investigating the case d = 1 by Theorem 9.7
-
+
we get EE2cz(Rl<(z,PI) = E(E(Rp1) <(-&PI)) = 2 for any R E zl.
We may ask about the analogous question in the case d = 2. By Lemma
24.1 we obtain

Lemma 18.1 suggests that the probability of returning to the origin from

that EzEZ(R) -
Z ( R - 1) before visiting Z(R) is O(R-l(logR)-l). Hence we conjecture
WPI) OW); for example,

for any d 2 2.
This page intentionally left blank
Chapter 27

A Few Further Problems

27.1 On the Dirichlet problem


Let U be an open, convex domain in R2 which is bounded by a simple
closed curve A. Suppose that a continuous real function f is given on A.
Then the Dirichlet problem requires us to find a function u = u ( x ,y) which
(i) is continuous on U + A,
(ii) agrees with f on A ,
(iii) satisfies the Laplace equation

d2u 8%
-+--=I).
8x2 dy2

A probabilistic solution of this problem is the following. Let { W ( t )t, 2


0) be a Wiener process on R2 and for any z E U define W,(t) = W ( t ) z . +
Further, let 0, be the first exit time of W,(t) from U , i.e.

(T, = min{t : W z ( t )E A} (2 E U).

Then we have
THEOREM 27.1 The function

4 2 ) =Ef(0Z)
is the solution of the Dirichlet problem.
The proof is very simple and is omitted. The reader can find a very
nice presentation in Lamperti (1977), Chapter 9.6.
Here we present a discrete analogue of Theorem 27.1. Instead of an
open, convex domain U we consider a sequence U, ( r = 1 , 2 , . . .) of domains
defined as follows.
Consider the following sequences of integers

293
294 CHAPTER 27

(1) bi+l < ci, ci+l > bi (i = 1 , 2 , . . . ,n, - 2),


l ai > ar, ci - bi > CW,
(2) ~ i + -

with some a > 0 and n, = 2,3,. . .. Now let

n--1

i=l

Condition (1) implies that U, is connected. Condition (2) has only some
minor technical meaning. Let A, be the boundary of U, and define a
“continuous” function f,(.) on the integer grid of A,, where by continuity
we mean:
For any E > 0 there exists a 6 > 0 such that If(z1) - f(z2)l 5 E if
llzl - z2ll 5 6r where z1,zz E A,Z2.
Now we consider a random walk {Sn;n = 0 , 1 , 2 , . . .} on Z2 and for any
+
z E (U, A,)Z2 we define

s?)=s,+z ( n = 0 , 1 , 2 ,...).

Let uz be the first exit time of 5’2)


from U,, i.e.

cz = min{n : S?) E A,}.

We wish to prove that


u ( z ) = Ef(S2))
is the solution of the discrete Dirichlet problem, meaning that
+
(i) u is “continuous” on (U, Ar)Z2, i.e. for any E > 0 there exists a
+
6 > 0 such that if z1,za E (V, A,)Z2 and IJz1- 2211 5 br, then
I.(Zl) - +2)1 F E,
(ii) u agrees with f on ArZ2,

(iii) u satisfies the Laplace equation, i.e.


A F E W F U R T H E R PROBLEMS 295

(ii) is trivial. (iii) follows from the trivial observation that


1
u(2,Y) = -(.(.
4
+ 1,y) + u(z - 1,y) + u(z,y + 1) + u(z,y - 1))
if satisfies the condition of (iii).
(2, y)
To see (i) we present a simple
LEMMA 27.1 For any E > 0 there exists a 6 > 0 such that if
z E UrZ2, q E ATZ2, - 411 5 ST
then
- 41) 5 E T } 2 1 - E .
P(IJS(")((T,)
Consequently
I E f ( S ( Z ) ( 4 )- f ( s ) l 5 E*.
Proof is simple and is omitted.
In order t o prove (i) we have to investigate two cases:
( a ) Z1, z2 E uTz2,
(p) one of ~ 1 z2
, is an element of A,Z2 and the other one is an element
of urz2.
In case (p) our statements immediately follow from Lemma 27.1. In
case ( a )assume that oZ15 oZzand observe that
= JIZZ - 211) _<
IJS(Z')(o,,) - S("2)((TZ,>II 6T.

Since S("2)(aZ,) applying Lemma 27.1 with q =


= S(S'z2'("zl))(~s(.a)(rr.l))
S(zl)(~z l ) z = S("2)(aZ,)we obtain (i).
and
Remark 1. Having the above result on the solution of the discrete Dirichlet
problem, one can get a concrete solution by Monte Carlo method. In fact
t o get the value of u(., .) in a point z0 = ( x o ,yo) E U,. observe the random
walk starting in zo till the exit time utOand repeat this experiment n times.
Then by the law of large numbers
n
(27.1)

where S1, S2, . , . are independent copies of a random walk. Hence the aver-
age in (27.1) is a good approximation of the discrete Dirichlet problem if n
is big enough. A solution of the continuous Dirichlet problem in some zo or
in a few fix points can be obtained by choosing r big enough and the length
of the steps of the random walk small enough comparing to the underlying
domain.
296 CHAPTER 27

27.2 DLA model


Let A1 C A2 c . . . be a sequence of random subsets of Z2 defined as follows:
A1 consists of the origin, i.e. A1 = {0},
A2 = A1 + y2 where y2 is an element of the boundary of A1
obtained by the following chance mechanism. A particle is released at 00
and performs a random walk on Z2. Then y2 is the position where the
random walk first hits the boundary of A1.
The boundary of a set A c Z 2 is defined as

d A = {y : y E Z 2 and y is adjacent to some site in A, but y 6 A}.


For example, dA1 = ((0, l ) ,( l , O ) , ( - l , O ) , (0, -l)}.
+
Having defined A,, An+l is defined as A,+l = A, yn+l where yn+1
is the position where the random walk starting from cc first hits dA,.
In the above definition the meaning of “released at co” is not very clear.
Instead we can say: let

R, = inf{r : r > 0, A, c Q ( r ) = {x : 5 r}).


((z((

Then instead of starting from infinity the particle might start its random
walk from (RE,O)(say). It is easy to see that the particle goes round the
origin before it hits A, (a.s. for all but finitely many n). This means that
the distribution of the hitting point will be the same as in case of a particle
released at 00.
Many papers are devoted to studying this model, called Diffusion Lim-
ited Aggregation (DLA). The reason for the interest in this model can be
explained by the fact that simulations show that it mimicks several physical
phenomena well.
The most interesting concrete problem is to investigate the behaviour
of the “radius”
r , = max(IIx1I : x E A,}.
Trivially r , 2 (n/n)’/2and it is very likely that r , is much bigger than
this trivial lower bound. Only a negative result is known saying that r n is
not very big. In fact we have

THEOREM 27.2 (Kesten, 1987). There exists a constant C >0 such


that
~ i m s u p n - ~ / ~5r ,c a.s.
n+oo
A F E W F U R T H E R PROBLEMS 297

The proof of Kesten is based on estimates of the hitting probability of


dA,. He proved that there exists a C > 0 such that for any y E dA, we
have
P{Y,+l = y} 5 CT,1/2. (27.2)
In order t o get a lower estimate of T , we should get a lower estimate of
the probability in (27.2) a t least for some y E dA,. Auer (1989) studied the
question of how one can get the lower bounds of the hitting probabilities
of some points of the boundaries of certain sets (not necessarily formed by
a DLA model). He investigated the following sets:

B1 = { ~ - T , O ) , ( - T + l , O ) , . . . , ( T - LO),(T,O)},
Bz = B1 + ((0, -1, (0, --T + I), . , ( 0 , r - I), ( O , T ) } ,
B3 = {x = (x1,22) : 1x1) + 1221 = T } .
Consider the point y = ( T , 0). Then the probability that the particle coming
from infinity first hits y among the points of dBi(i = 1,2,3) is larger than
or equal to
C T - ’ / ~ if i = 1 , 2
and
cT-2/3(i0gr)-1/3 if i =3
with some C > 0.

27.3 Percolation
Consider Z 2 and assume that each bond (edge) is “open” with probability
p and “closed” with probability 1 - p . All bonds are independent of each
other. An open path is a path on Z2 all of whose edges are open.
One of the main problems of the percolation theory is to find the prob-
ability O ( p ) of the existence of an infinite open path. Kesten (1980) proved
that
=O if p < 1/2,
> 0 if p > 1/2.
The value 1/2 is called the critical value of the bond percolation in 2’.
An analogous problem is the so-called site percolation. In site percola-
tion the sites of Z2 are independently open with probability p and closed
with probability q = 1- p . Similarly as in the case of the bond percolation
a path of Z 2 is called open if all its sites are open and we ask the probability
O * ( p ) of the existence of an infinite open path. The critical value of the
site percolation in Z 2 is unknown, but T6th (1985) proved
O * ( p ) = 0 if p < xi N 0,503478
298 CHAPTER 27

where zo is the root lying between 0 and 1 of the polynomial

3%' - 8x7 + 6x6 + x4 - 1.


We call the attention of the reader to the survey of Kesten (1988) on
percolation theory.
And God said, “Let there be lights in the firma-
ment of the heavens to separate the day from the
night; and let them be for signs and for seasons
and for days and years.”
The First Book of Moses

111. RANDOM WALK I N RANDOM


ENVIRONMENT
This page intentionally left blank
Not at ions

1. & = {. . . ,E-2, E-1, Eo,E l , E2, . . .} is a sequence of i.i.d.r.v.’s satis-


fying p < Ei < 1 - p with some 0 < ,B < 1/2 called environment.
2. { 0 1 , F1, PI},{ 0 2 , . & , PE},(0,F,P} (see Introduction).

4. TO= 0, Tn = Vi + V, + . . . + V,, T-, = V-1 + V-2 + . . . + V-,


( n = 1,2,.. .).
5.
if b = a,
if b = a + l ,
D(a,b) = b-a-1 j
It nUa+i if b 2 a+2,
j=1 i=l

6.
1
-- D(O, n - 1)
=I- 1 + + UIU2 + . . . -I-
Ul .
U l U 2 . . Un-2
D*(n) - - D ( 0 ,n ) 1 + + + ... +
Ul UlU, U l U 2 . . . un-1

= (1 + u;:l + (u,-lun-2)-l + ... + (Un-1un-2.. .


i.e.

7.

(TI = 1,2,.. .). Caution: D ( n ) = D(O,n),however, D ( - n ) # D(-n,O).

301
302 III. R A N D O M W A L K IN R A N D O M E N V I R O N M E N T

8. I ( t ) is the inverse function of D ( n ) , i.e.

I ( t ) = 5 if D ( k ) 1. t < D ( k + I),
I ( - t ) = k if D ( - k ) 1. t < D ( - k - 1) (t 2 1, k = 1 , 2 , . . .).

9. Ro, R1, . . . is a random walk in random environment (RWIRE) (see


Introduction).
10. p(a,b,c) (see Lemma 30.1).
Chapter 28

Introduction
The sequence {S,} of Part I was considered as a mathematical model of
the linear Brownian motion. In fact it is a model of the linear Brownian
motion in a homogeneous (non-random) environment.
We meet new difficulties when the environment is non-homogeneous.
It is the case, for example, when the motion of a particle in a magnetic
field is investigated. In this case we consider a random environment in-
stead of a deterministic one. This situation can be described by different
mat hematical models.
At first we formulate only a special case of our model. It is given in the
following two steps:

Step 1. (The Lord creates the Universe). The Lord visits all integers of
the real line and tosses a coin when visiting i (i = O , * l , f 2 , . . .). During
the first six days He creates a random sequence
E = {. . . ,E-2, E-1, Eo, El, E2,. . .}
where Ei is head or tail according the result of the experiment made in i.

Step 2. (The life of the Universe after the Sixth Day). Having the sequence
{. . . , E-2, E-1, Eo, E l , E2, . . .} the Lord puts a particle in the origin and
gives the command: if you are located in i and Ei is head then go t o the
left with probability 3/4 and to the right with probability 1/4, if Ei is tail
then go to the left with probability 1/4 and to the right with probability
3/4. Creating the Universe and giving this order to the particle “God rested
from all his work which he had done in creation” forever.
The general form of our above, special model can be described as follows:

Step 1. (The Lord creates the Universe). Having a sequence


I = {. . . ,E-2, E_I,Bo,El, Ez,. . .} of i.i.d.r.v.’s with distribution
P{Eo < z} = F ( z ) , F ( 0 ) = 0, F(1)= 1,
the Lord creates a realization & of the above sequence. (The random se-
quence {. . . ,E-2, E-1 , Eo , E l , E2, . . .} and a realization of it will be denoted
by the same letter &.) This realization is called a random environment (RE).

Step 2. (The life of the Universe after the Sixth Day). Having an RE & the
Lord lets a particle make a random walk starting from the origin and going

303
304 CHAPTER 28

one step to the right resp. to the left with probability EO resp. 1 - Eo. If
the particle is located at II: = i (after n steps) then the particle moves one
step to the right (resp. to the left) with probability E; (resp. 1-I?;). That
is, we define the random walk {Ro,R1,. . .} by Ro = 0 and
+
P&{Rn+l= i 1 I R, = i, Rn-l, R n - 2 . . . ,R1}
= 1 - P&{R,+l = i - 1 I R, = i,Rn-l, Rn-z,. . . ,Ri} = E;. (28.1)
The sequence {R,} is called a random walk in RE (RWIRE).
Now we give a more mathematical description of this model as follows.
Let ( 0 1 , .FI,P I } be a probability space and let
E = E(w1)
= {. . . ,E-1 = E-l(wl),Eo = Eo(wl),El = E i ( ~ i ) .,.}
.
(w1 E 01) be a sequence of i.i.d.r.v.’s with P1{E1 < x} = F ( z ) ( F ( 0 ) =
1 - F(1)= 0).
Further, let {0z,.Fz} be the measurable space of the sequences w2 =
{ E ~ , E Z , .. .} where E ; = 1 or E ; = -1 (i = 1 , 2 , . . .) and .Fzis the natural
cr-algebra. Define the r.v.’s Y1,Yz,... on 0 2 by yZ(wz) = ~i (i = 1 , 2,...)
.+
and let Ro = 0, R, = Y1+ Yz +. . Y, ( n = 1,2, . . .). Then we construct a
probability measure P on the measurable space { R = 01 x RZ, F = F1 x Fz}
as follows: for any given w1 E R1 we define a measure P,, = P&(,,) = P E
on Fz satisfying (28.1). (Clearly (28.1) uniquely defines P E on Fz.)Having
the measures P E ( ~ (w1 , ) E 01) and P1 one can define the measure P on F
the natural way.
Our aim is to study the properties of the sequence {R,}. In this study
we meet two types of questions.
(i) Question of the Lord. The Lord knows w1, i.e. the sequence E ; or in
other words, He knows the measure P E and asks about the behaviour
of the particle in the future, i.e., He asks about the properties of the
sequence { R n }given &.
(ii) Question of the physicist. The physicist does not know w1. Perhaps
he has some information on F , i.e. he knows something on P I . He
also wants to predict the location of the particle after n steps, i.e.
also wants to describe the properties of the sequence {R,}.
A typical answer to the first type of question is a theorem of the fol-
lowing type:
THEOREM 28.1 There exist two sequences of F1-measurable functions
f,( 1 ) - f,( 1 )( E ) 5 f?) = f?)(&)such that
a.s. (28.2)
INTRODUCTION 305

f o r all but finitely m a n y n, i.e.

P ~ { f t ) ( f5) rnax
Osksn
(&I 5 fF)(f)for all but f i n i t e l y m a n y n } = 1.
Since the physicist does not know the environment & he will not be satisfied
with an inequality like (28.2). However, he wants to prove an inequality
like

THEOREM 28.2 There exist two deterministic sequences ail) 5 an(2)


such that
a p 5 fp 5 f?' 5 a?) a.s. (PI) (28.3)
f o r all but finitely m a n y n.
Having inequalities (28.2) and (28.3) the physicist gets the following answer
to his question:

THEOREM 28.3 There exist t w o deterministic sequences ail) 5 an(2)

s u c h that
a;) 5 max ( R ~5 Ia:) a s . (P) (28.4)
Oskln

for all but finitely m a n y n. Equivalently

~ { a p5)Ornax
lksn
I& 1 5 a?) f o r all but f i n i t e l y m a n y n }

= P1{P,={&)5 max l&l 5 ai2)forallbZLtfiniteZymanyn} = 1) = 1.


OlkLn

Remark 1. The exact forms of Theorems 28.1, 28.2 and 28.3 are given in
Theorems 30.6, 30.8 and 30.9 where the exact forms of a:), a?),):f fp),
are given.

Remark 2. In the special case when

Pl{EO = l/2} = F(1/2 + 0) - F(1/2) = 1,


the RWIRE problem reduces to the simple symmetric random walk prob-
lem.
This page intentionally left blank
Chapter 29

In the First Six Days


In this chapter we study what might have happened during the creation of
the Universe, i.e. the possible properties of the sequence E are investigated.
The following conditions will be assumed:
(C.l) there exists a 0 < /3 < 1/2 such that P{p < EO< 1 - p } = 1,
(C.2)
00

EIVo = l m z d P l { V 0 < z} =
X

where F ( x ) = Pl(E0 < z}, VO= log VOand UO= (1- Eo)/Eo,

Remark 1. For a simple symmetric random walk (i.e. P1{Eo = 1/2} = 1)


we have P1{Vo = 1) = Pl(V0 = 0) = 1 and consequently (C.l) and (C.2)
are satisfied; however, (C.3) is not satisfied since ElV: = o2 = 0.
We also mention that if (C.1) and (C.2) hold and E1V: = o2 = 0 then
Pl{Eo = 1/2} = 1.
Remark 2. Most of the following lemmas remain true replacing (C.l) by
a much weaker condition or omitting it. Here we are not interested in this
type of generalizations.
L E M M A 29.1

limsupT, = limsupT-, = -liminfT, = -1iminfT-, = 0;) as. (PI).


n+m n+m n+ 00 n+m
(29.1)
If w e a s s u m e (C.l) a n d (C.3) but instead of (C.2) w e a s s u m e t h a t El% =
m f 0. T h e n

lim T, = lim T-, = (sign m)m a s . (PI) (29.2)


n+m 71-00

+ +
where T, = V1 . . . V,, T-, = VW1 + . . . + V-,, V, = logUj, Uj =
(1 - Ej)/Ej a n d TO= 0.

307
308 CHAPTER 29

Proof. (29.1) is a trivial consequence of the LIL of Hartmann and Wintner


(cf. Section 4.4), (29.2) follows from the strong law of large numbers.
LEMMA 29.2
lim D ( n ) = 0;) as. (PI) (29.3)
n+oo
(cj. Notation 5).

Proof. Since
D(n) I + U ~ + U ~ U ~ + . . . + U ~ U ~ . =. e. oU+~e T
- ~l + e T z + . . . + e T n - l ,
(29.4)
(29.3) follows from Lemma 29.1.
By (29.4) we have
exp( max Tk) 5 D ( n ) 5 nexp( max Tk) (29.5)
O<k<n-l O<k<n-l

and the LIL implies


LEMMA 29.3 For any E > 0 and for any p = 1 , 2 , . . . we have
max Tk 5 (1+ ~ ) a ( 2 n l o g l o g n ) ~ / ~
lsksn
a.s. (PI) f o r all but finitely many n, (29.6)
max
l<kLn
Tk > (1- ~ ) u ( 2 n l o g l o g n ) ~i.0.
/ ~ a s . (PI), (29.7)

max Tk 5 n’/2(lognloglogn...log,n)-’ i.0. a s . (PI), (29.8)


lsksn

a s . (PI) f o r all but finitely many n. (29.9)


By (29.5) we also get
~ ( n5 )exp{(l+ ~ ) 0 ( 2 n l o g l o g n ) ~ / ~ }
as.(PI)f o r all but finitely many n , (29.10)
D(n) >_ exp{(l- ~ ) a ( 2 n l o g l o g n ) ~ /i.0.
~ } a.s. ( P I ) , (29.11)
D ( n ) 5 exp{n1~2(lognloglogn~~~log,n)-1}i.0. a s . (PI), (29.12)
D ( n ) >_ exp{n1/2(logn.loglogn...(log,n)l+E)-’}
as. (PI) f o r all but finitely many n. (29.13)
Replacing the maxl<k<n by max_,<k<-l the inequalities (29.6) - (29.9)
remain true. Replacing D(n) by or-;) in (29.10) - (29.13) they remain
true.
logD*(n)
lim sup = u a.s. (PI), (29.14)
n+co @n log log n
IN T H E FIRST SIX DAYS 309

log D * ( k )
lim inf max
n+m Osksn fi d&= or/& as. (Pi), (29.15)

D * ( n )? 1, (29.16)
D *( n)5 n i.0. a s . (PI). (29.17)

Proof. Inequalities (29.6)-(29.13) are clear as they are. The following


simple analogue of (29.5),

(29.18)
implies (29.16) and (29.17).
In order to get (29.14) and (29.15) approximate the process {Tk, k 2 0)
by a Wiener process { o W ( t ) ,0 5 t < 00). By Theorem 10.2 the process

- min ( W ( T )- W ( t ) )
O<t<T

is identical in distribution to the process {IW(t)l,t 2 O}. Hence the LIL


and the Other LIL imply (29.14) and (29.15).
LEMMA 29.4 For any E > 0 and for any p = 1 , 2 , . . . we have
I+€ 2
I ( t ) I (1% It1log 1% It1. . .log,-, ItKlogp Itl) )
a s . (PI)if It1 i s big enough, (29.19)
I ( t ) 2 (log It[loglog It1 . . .logp lt1)2 i.0. U.S. (Pi), (29.20)
1 + E log2 It1 .
I ( t ) I -- 2.0.U.S. ( P I ) , (29.21)
2a2 log, It1
1 - E log2 It(
I ( t ) 2 -- a s . (Pl)if It1 is big enough. (29.22)
2a2 log, It\

Proof. (29.19), (29.20), (29.21) (resp. (29.22)) follows from (29.13),


(29.12), (29.11) (resp. (29.10)).
LEMMA 29.5

(29.23)
(29.24)

(29.25)
310 CHAPTER 29

(' ul(t) )-l


-t D * ( I ( t ) )
5 D ( I ( t ) )5 t , (29.26)

D(n + 1) 5 -P1D ( n ) + 1, (29.27)

I(Xt + 1) _> I ( t ) + 1 if x > 1, (29.28)


P
where ,B is the constant of (C.1).

Proof. (29.23), (29.24), (29.25) follow immediately from the definitions.


(29.23) and (29.25) combined imply

This, in turn, implies (29.26). (29.27) follows from (C.1). (29.28) follows
from (29.23) and (29.27).
Chapter 30

After the Sixth Day

30.1 The recurrence theorem of Solomon


THEOREM 30.1 (Solomon, 1975). Assuming conditions (C.l), (C.2),
(C.3) we have

P { R , = 0 Lo.} = P1{P&{R, = 0 id.} = 1) = 1.


Assuming (C.l), (C.3) and ElVo # 0 we have
PI{P&{Rn= 0 i.0.) > 0 ) = 0.

Remark 1. The statement of the above Theorem can be formulated as


follows: with probability 1 ( P I ) the Lord creates such an environment in
which the recurrence theorem is true, i.e. the particle returns to the origin
i.0. with probability 1 (PE).Before the proof of Theorem 30.1 we present
an analogue of Lemma 3.1.
LEMMA 30.1 Let

p(a1b1 c)
= PE{min{j : j > m, Rj = a} < min{j : j > m, Rj = c} 1 S, = b}

( a 5 b _< c), i.e. p(a,b,c) = p(a,b,c,&) is the probability that a particle


starting from b hits a before c given the environment E . Then

Especially
1 1
p ( 0 , 1,n) = 1 - - and
D(n>
p ( 0 , n - 1,n) = -
D'(n)'

Proof. Clearly, we have

311
312 CHAPTER 30

p ( a , b, C) = Ebp(a, b + 1, + (1- Eb)p(a,b - 1,c).


C)

Consequently,
1 Eb
p(a,b + 1,C) - p ( ~b,,C) = -Eb
-
( p ( a ,4 c> - p ( a , b - 1,c>>.

By iteration we get
p(~,b+ ) ( ~ , b , c )= UbUb-l’.’Ua+l(p(a,a+l,C)
1 , ~- p -P(a,a,c))
= UbUb-1 . . . Ua+l(p(a,a+ 1,c) - 1). (30.1)

Adding the above equations for b = a , a + 1,.. . ,c - 1 we get


-1 = p ( a , c, c ) - p ( a ,a , c ) = D ( a ,c ) ( p ( a ,a + 1,c) - l),
i.e.
1
p ( a , a + 1,c) = 1 - ___ (30.2)
D ( a ,c ) .
Hence (30.1) and (30.2) imply

Adding these equations we obtain


p(a,b + 1,c) - 1= p ( a , b + 1,c) - p ( a , a , c )

- - D ( a ,b + 1)
D ( a ,c )
Hence we have the Lemma.
Consequence 1.

p(O,l,n;&)= lim
n+cc ( 1- -
Din)) = ’} =”
(30.3)

P{ lim p ( - n , - 1 , O ; f ) = 0) = 1. (30.4)
n+cc
(30.3) follows from Lemma 30.1 and (29.3). In order to see (30.4) observe
D ( - n , -1) -
1
p(-n, - 1 , O ) = 1-
D(-n,O) D(-n)
and apply (29.13) for D ( - n ) .
The following lemma is a trivial analogue of Lemma 3.2, the proof will
be omitted.
A F T E R THE SIXTH DAY 313

LEMMA 30.2 For any -co < a _< b < co we have

P{liminf Rn = a} = P(1imsup R , = b } = 0.
n-+m ,--too

P r o o f of Theorem 30.1. Assume that R1 = 1, say. Then by Lemma 30.2


the particle returnes t o 0 or it is going to +co before returning. However,
by (30.3) for any E > 0 there exists an no = no(&, I )such that p(O,1,n ) =
1- l/D(n) 2 l - - ~ if n 2 no. Consequently the probability that the particle
returns to 0 is larger than 1 - E for any E > 0 which proves the Theorem.

30.2 Guess how far the particle is going away


in an RE
Introduce the following notations:

M - ( n ) = - min Rk,
Olksn
~ ( n= )max{M+(n), h f - ( n ) }= max IRkI,
O<k<n
Po = 0,
p1 = min{k : k > 0, Rk = 0},
... ... ...
pj+l = min{k : k > p j , Rk =0},
((k,n)=#{l:O<l < n , R ~ = k } ,
an) = m;x€(k,n),
v(n)= #{i : 0 < i I: n - 1, Rp;+l = 1).

Observe that ( ( 0 ,p n ) = n.
Our aim is t o study the behaviour of M ( n ) . Especially in this section a
reasonable guess will be given.
Consider the simple environment when

Note that conditions ( C . l ) , (C.2) and ((3.3) are satisfied. Note also that in
the environment E = {. . . ,3/4,1/4,3/4,1/4,3/4,. . .} the behaviour of the
314 CHAPTER 30

random walk is the same as that of the simple symmetric random walk.
For example, it is trivial to prove that

limsup b,M(n) = 1 a s .
n+m

if E is the given environment and b, = (2nloglogn)-1/2.


One can guess that since environment {. . . , 3 / 4 , 1 / 4 , 3 / 4 , 1 / 4 , . . .} is
nearly the typical one, M ( n ) will be practically n1/2 in most environments.
This way of thinking is not correct because we know that in a typical en-
vironment there are long blocks containing mostly 3/4's and long blocks
containing mostly 1/43.
Assume that in our environment

which is a typical situation. Then by (29.10), (29.11) and Lemma 30.1 we


have
1
p(O,1, n) = 1- - 1- exp(-n1/2)
D(n )
and

This means that the particle will return to the origin exp(nl/') times before
arriving n or -n. Hence to arrive n requires at least exp(n'/') steps.
Conversely, in n steps the particle cannot go farther than (logn)'.
This way of thinking is due to Sinai (1982). He was the first one who
realized that having high peaks and deep valleys in the environment, for the
particle it takes a long time to go through. Clearly high peak means that
T ( k ) is a big positive number for k > 0 resp. it is a big negative number
for k < 0 while the meaning of the deep valley is just the opposite.

30.3 A prediction of the Lord


LEMMA 30.3 For any environment E we have

PE{u(n) = k} = (30.5)

Iv(n) - nEol
lim sup = 1 as. (PE) (30.6)
n--tm (2nEo(l- Eo) loglogn)1/2
where u, is defined an Section 30.2.
A F T E R T H E S I X T H DAY 315

Proof is trivial.
LEMMA 30.4 For any environment & and k = 1 , 2 , .. . we have

P&{M+(p,)< k I vn} = (p(O,1,k ) ) y n = 30.7)

= 2 &)'
1=0
(1 - (Y)EA(l- Eo)n-z

&)+
= ( E o (1 - 1 -I&), = (1 - &) n
. (30.8)

Proof is trivial.
Now we prove our
THEOREM 30.2 For 0ny environment & we have
I(n(logn)-l+) 5 M+(P,) 5 I(n(logn)l+E) a s . ( P E ) , (30.9)
I(-n(logn)-l-E) 6 M-(p,) <_ I(-n(1ogn)lfE) a s . (PE), (30.10)

max{ I (n (log n )- I - € ) , I ( -n (log n)-l-")} 5 M (p n )


5 max{I(n(logn)l+E),I(-n(logn)'+')} a s . (PE) (30.11)
for all but finitely many n .

Proof. By Lemma 30.4 and (29.26) we have

{
P& M+(pn) 2 I
G-n(logn)l+E
)I
EoQn
= 1
-(log n)l+E
3
where
316 CHAPTER 30

Let nk = 2k. Then by the Borel-Cantelli lemma we get (cf. (29.16))

M+(P,,)
(:
< I -nk(lOgnk)'+€
1 8.s. (PE)

for all but finitely many k . If nk 5 n < nk+l we have

M+(Pn) 5 M+(P7Lk+l <


I- (i-nk+l(lognk+l)lfE
1 5 I (nk(lognk)'+E)
5 I (n(logn)l+€) 8,s. (PE)for all but finitely many n.
Hence we have the upper part of (30.9).
Now we turn to the proof of the other inequality of (30.9).
By Lemma 30.4 and (29.26) we have

PE{M+(~~) < I(n(logn)-l-E)) = (1 -


EO
D(I(n(1og n)-l-&))
< (1
- - Eo(logn)'+&)"
n
5 exp (- EO(log n )'+').
Hence we have (30.9) by the Borel-Cantelli lemma.
The proof of (30.10) is identical. (30.11) is a trivial consequence of
(30.9) and (30.10).
In order to get some estimates of M ( n ) (resp. M + ( n ) ,M-(n)) the Lord
is interested to estimate pn or equivalently ( ( 0 ,n). To study this problem
in a more general form we present a few results describing the behaviour of
the local time E(z,n).

LEMMA 30.5 For any integer k = 1 , 2 , . . . and any environment & we


have

Consequently
A F T E R T H E S I X T H DAY 317

Proof. Clearly we have

+
P & { t ( k ,PI) = 0 ) = 1 - Eo E o p ( O , l ,k )
= l - E , + E o 1-- EO
( Dtk)) =l-D(lC).

- EO(l-Ek)
- (1--");:L
D(k)D"(k)
A trivial calculation gives
LEMMA 30.6 (Csorgo-HorvAth-Rivksz, 1987). For any k = 1 , 2 , . . .

(30.12)

(30.13)

(30.14)

(30.15)

and
(30.16)
318 CHAPTER 30

Proof. As an example we prove (30.14). By Lemma 30.5 we have

Remark 1. (30.12) implies that: for any E >0

mk 2 -P exp((1- &)c(2kloglog1c)1/2) i.0. a.s. (PI)


1-P
and

mk 5-
P
'
- exp(-(l- E)c(21~loglogk)l/2) i.0. a.s. (PI).

Compare these inequalities and (9.6).


LEMMA 30.7 For any k = 1 , 2 , . . . and any environment & we have

(30.17)

(30.18)

Proof is trivial.
Now we give a somewhat deeper consequence of (3.14).
LEMMA 30.8 (CsorgB-Horvhth-Rivisz, 1987). For any

and any k = 1 , 2 , . . ., we have

where
(-)
5
l@kl 5 A D*(k) and A is a positive constant.
A F T E R T H E S I X T H DAY 319

Proof. By Taylor expansion we get

with (015 1,l q ( 51, where


D*( k )
D ( k ) = ___ and h(X) = A - -
A2
+ 0-A36 '
1 - Ek 2
Consequently

LAI~I~(w)~
if A is big enough. Hence by (30.14) we have

Multiplying the above inequality by

one gets the Lemma.


LEMMA 30.9 Let

T h e n for a n y k = 1 , 2 , . . . and n = 1 , 2 , . . . we have

P&{lE(k,Pn) -nmkl 2 x f i }
320 CHAPTER 30

Then we get

and we have the Lemma.


This last inequality gives a very sharp result for E(k, p n ) when k is not
too big. In cases where k can be very big it is worthwhile to give another
consequence of Lemma 30.6. In fact we prove
LEMMA 30.10 For any K > 0 there exists a C = C ( K )> 0 such that

P&{E(k,pn)2 2 n m k + ClognD*(k)) 5 nPK (30.19)

( k = 1 , 2 , . . . ; n = 1 , 2 , . . .).

P r o o f . Let = X I , = ( 1 - E k ) / 2 D * ( k ) . Then by (30.15) and (30.16) we


get

, 2 2nmk + co*(k)
p & { t ( k Pn) logn}
= P&{expAt(k,p,) 2 exp(2Xnmk ACD*(k)logn)} +
5 E&(expX[(k,pl))nexp(-2Xnmk - XCD*(k)logn)

( 2) n- 1 -Ek Eo o * ( k ) - - 1- E k
5 exp n- - -2
2 D * ( k )1 - E k D ( k ) 2D*(k)

C log n

which proves (30.19).


A very similar result is the following:
LEMMA 30.11 For any Cl > 2/p (cf. ( C . l ) ) we have

(30.20)

(k = 1 , 2 , . . . ; n = 1 , 2 , . . .).
A F T E R THE SIXTH DAY 321

Proof. Let X = XI, = (1 - B k ) / 2 D * ( k ) . Then by (30.15) and (30.16) we


get

Hence we have (30.20).


An analogue result describes the behaviour of c ( k , p l ) when Ic is a big
positive number.
L E M M A 30.12 There exist positive constants C and C1 such that

PE{J(kPl) 2 C1D*(k)logk I C ( k P 1 ) > 01 I (30.2 1)

and
PE(E(kP1) 5 k - 2 D * ( k ) I E ( k , p 1 ) > 0) i ck-2. (30.22)

Proof. Let p k be the number of negative excursions away from k between


0 and p1. Clearly, we have

Consequently

(30.23)
Hence
322 CHAPTER 30

with some constant C > 0.

Proof. Let

Then
P&{G= 1) = EO(1 - P(0,1, k)) = P
and by the Bernstein inequality (Theorem 2.3)

{ -t }-
PE S < n p < C e x p -- (3
Let 1 5 il < i 2 < . . . < is 5 n be the sequence of those i's for which
A F T E R T H E S I X T H DAY 323

and let
~j = t ( k , Pijt-1) - t(k,pij) ( j = 172,. . S),
i.e. uj is the number of excursions away from k between pij and pij+l.
Further, let uJF resp. u: be the number of the corresponding negative resp.
positive excursions. Then

uj = u3' +Uj,
P&{UF = m } = (1 - Q)m-lq, Q (1 - E k ) p ( O , k - 1,k),
and using again the Bernstein inequality we obtain

Hence

5 C e x p (-$) + P&{ u; + u; + . . . + u; 5
1
-nmk, s = -2ln p I
< Cexp
- (-$)+ C e x p (-z). 4

Since uj 2 uJ7, we have the Lemma.


Now we give an upper bound for pn.
THEOREM 30.3 For any E > 0 and for all but finitely m a n y n we have

Pn I2n
I ( n ( l 0 gn ) l + C )

k=-I(-n(1og
c n ) l + C )
m k +Clogn c
r (n(iog n)l+c)

k=-I( n(l0g n)l+S)


D*(k) U.S. (PE)

(30.24)
where C is a big enough positive constant.

Proof. By Theorem 30.2 we have


324 CHAPTER 30

Lemma 30.10 and the Borel-Cantelli lemma imply

Analogous inequality can be obtained for negative k's. Hence we have


(30.24).
A somewhat weaker but simpler upper bound of pn is given in the
following:
THEOREM 30.4 For a n y E >0 and for all but finitely m a n y n we have

pn 5 n(logn)2+E
k=-I(-n(log
c
I(n(l0g n ) l + E )

n)l+s)
ml, U.S. (PE).

Proof. By (30.12), (29.23) and (C.1) of Chapter 29 for any 0 5 k 5


I(n(logn)l+'), we have

Hence

k=O k=O

Since analogous inequality can be obtained for negative k's we have the
Theorem.
A lower bound for p n is the following:
THEOREM 30.5
n
pn 2 -rnaxmk
4 kEA
U.S. (PE) (30.25)

where A = An = { k : 0 < D ( k ) < nn/logn}, K < PI12 and P is defined


in (C.l) of Chapter 29.
A F T E R T H E SIXTH DAY 325

(30.25) follows by Lemma 30.13.

Remark 2. Remark 1 easily implies that

lim maxrnk = co 8,s. (Pl).


n+m & A

Hence (30.25) is much stronger than the trivial inequality p n 2 2n.


Clearly having the upper bound (30.11) of M(p,) and the lower bound
(30.25) of pn we can obtain an upper bound of M ( n ) . Similarly having the
lower bound (30.11) of M ( p n ) and the upper bound (30.24) of ,on a lower
bound of M ( n ) can be obtained. In fact we have

THEOREM 30.6 Let

f,'(n) = max{I(n(Iog n)'+€),I(-n(log n)l+')),


f; ( n )= max(I(n(1og n)-'-'), I(-n(log T I - ' - € ) } ,

k=-Z(-n(log ,)I+<)
n
h(n) = - max mk.
4 LEA

Then for all but finitely many n

(30.26)
where g-'(.) resp. h-'(.) are the inverse functions ofg(.) resp. h(.).

Proof. Theorems 30.2 and 30.4 combined imply

which, in turn, implies the lower inequality of (30.26). Similarly by Theo-


rems 30.2 and 30.5 we get

and we have the upper inequality of (30.26).

Remark 3. Note that knowing the environment E the lower and upper
bounds of (30.26) can be evaluated.
326 CHAPTER 30

30.4 A prediction of the physicist


Having Theorem 30.2 and Lemma 29.4 the physicist can say
1- E log2 n
-- 5 I(n(logn)-l-E) 5 M +(pn)
202 log, n
I I(n(logn)'+")5 (lognloglogn ...log,-, n(log, n)l+€)' (30.27)

and the analogue inequalities are true for M - ( p , ) and M ( p , ) . Theorem


30.2 and Lemma 29.4 also suggest that (30.27) and the corresponding in-
equalities for M-(p,) and M(p,) are the best possible ones. It is really so.
In fact we have
THEOREM 30.7 (Deheuvels-R6vbsz, 1986). For any E > 0 and p =
1,2,. . , we have

M+(pn) 5 ( l o g n l o g l o g n ~*log,-,
~ n(log, n ) 1 + E ) 2
a.s.(P) if n is big enough, (30.28)
M+(pn) 2 (lognloglogn...log,_l nlog,n), i.0. a.s. (P), (30.29)
1+ log2 n
M+(Pn) I s G i.0. a s . (P),
E
(30.30)

1 - E log2 n
a s . (P) if n is big enough. (30.31)

T h e same inequalities hold for M-(p,) and M(p,).

Proof. (30.27) gives the proofs of (30.28) and (30.31). Since by Theorem
30.2 for all but finitely many n

M+(p,) 2 I(n(logn)-'-') as. (PE)

and by (29.20)
I(n(l0g n)-l-E) 1 ((log n - (1+ 2E) log, n )log, n . . . log,+, n ) ,
2 (log n log, n - .. log, n)' i.0. a s . (PI)

we have (30.29). Similarly, by Theorem 30.2


A F T E R T H E S I X T H DAY 327

hence we get (30.30). Clearly, the physicist is more interested in the be-
haviour of M + ( n) , M-(n ), M ( n ) than those of M+(p,), M-(p,), M(p,).
Since pn 3 2n by (30.28) and (30.30) we have

THEOREM 30.8 (Deheuvels-Rkvksz, 1986). For a n y E > 0 and p =


1 , 2 , . . . w e have

M+(n) 5 (log n log log n . log,-, n(Iog, n)1 + & ) 2


as. (P) if n is big enough, (30.32)

and
(30.33)

T h e s a m e inequalities hold for M-(n) and M ( n ) .

To get a lower bound for M+(n ), M-(n) and M(n) is not so easy. However,
as a consequence of Theorem 30.6 we prove

THEOREM 30.9 (Deheuvels-RkvBsz, 1986). For a n y E > 0 we have

log2 n
U.S. (P) (30.34)
M+(n) 2 (log log n)2+E
E > 0 a n d for all but finitely m a n y n . T h e s a m e inequality holds
for a n y
for M-(n) a n d M ( n ).

Proof. Let
I(n(l0g ,)l+S)

g+(n>= n(logn)2+E C mk.


k=O

Then by Condition (C.l), (30.12), (29.19) and (29.6)

5 -n(logn)2+E(logn)2(log2
1-P n)2+2E
P
x e x p ( ( l + 2~)(r(2(1ogn)~(log,
n)2+3E)1/2
5 exp(logn(loglogn)1+2E). (30.35)
328 CHAPTER 30

It can be shown similarly that for any E >0


g(n) 5 exp(logn(Ioglogn)'+") a.s. (PI) (30.36)
for all but finitely many n. Consequently

a.s. (PI) (30.37)

if n is big enough. Hence by (29.22)

1(g-l(n) (log g-l(n))-l-E) 2 (10gn)2 a.s. ( P I ) . (30.38)


(log log n ) 2 + E
(30.26) and (30.38) combined imply the Theorem.
A much stronger theorem is proved by Hu and Shi (1998/B).
THEOREM 30.10 Assume conditions (C.l), (C.2), (C.3) and let {a,}
be a sequence of positive nondecreasing numbers. Then we have
P{R, 2 an(Iogn)2 i.o.} = 1 (P)
if and only if
x&
03

n=2
r2a2
exp ( - F a n ) = 00,

and

if and only if
00 1f2
an
n=2
and

if and only if
1
= 00.
n=2 nakf log n

Remark 4. In the above theorem conditions (C.l), (C.2), (C.3) can be


replaced by the weaker condition:

(30.39)

where 0 < Ci < 00 (i = l , 2 , 3 ) .


Chapter 31

What Can a Physicist Say About


the Local Time [(O,n)?

31.1 Two further lemmas on the environment


In this section we study a few further properties of E . These results are
simple consequences of the corresponding results of Part I.
LEMMA 31.1 For any 0 < E < 1 and 0 < S < ,512 there exists a random
sequence of integers 0 < n1 = nl(wl;E,S) < 722 = n z ( w l ; ~ , d<
) . . . such
that
Tnk 5 -(1 - &)gb;: and max Tj 5 n:’2(lognk)-6 (31.1)
o<j<nk

where 6, = b(n) = (2nl0glogn)-~/’. Consequently by (29.5) and (29.18)

D ( n k ) 5 exp(n:’2(logn,+)-6) and
o*(nk) 2 exp((1 - E)ab;li). (31.2)

Proof. (31.1) follows from (5.11) and Invariance Principle 2 of Section 6.4.
LEMMA 31.2 There exist two constants C1 > 0 , C, > 0 and a random
sequence 0 < n1 = n l ( w 1 ) < nz = nz(w1) < . . . such that
T,, 2 C2bif and

(31.3)

Consequently by (29.5) and (29.18)


o(72k) 2 exp (~zblf) a n d

(31.4)

Proof. (31.3) is a simple consequence of Theorem 10.5 and Invariance


Principle 2.

329
330 CHAPTER 31

31.2 On the local time t(O,n)


Since ( ( 0 ,pn) = n Theorem 30.4 and (30.36) imply
THEOREM 31.1 For any E > 0 we have
n = ((0, P n ) 5 ((0, exP(logn(log logn)l+E))
i. e.
(31.5)

for all but finitely many N .


Now we prove that (31.5) is nearly the best possible result. In fact we have
THEOREM 31.2 For any E > 0 we have

i.0. a.s. (31.6)

Proof. Define the random sequence {N_k} as follows: let N_k be the largest
integer for which

I(N_k(log N_k)^{−(1+ε)}) ≤ n_k + 1

where n_k is the random sequence of Lemma 31.1. Then by (30.9)

M⁺(ρ_{N_k}) ≥ I(N_k(log N_k)^{−(1+ε)/2}) > I(N_k(log N_k)^{−1−ε}) + 2 ≥ n_k + 2   a.s. (P)

for all but finitely many k, i.e. ξ(n_k, ρ_{N_k}) > 0. That is to say, there exists a
0 < j = j(k) < N_k such that ξ(n_k, (ρ_j, ρ_{j+1})) = ξ(n_k, ρ_{j+1}) − ξ(n_k, ρ_j) > 0.
Hence by (30.22)

P_ℰ{ξ(n_k, (ρ_j, ρ_{j+1})) ≤ n_k^{−2} D*(n_k)} ≤ C n_k^{−2},

and by (31.2) and the Borel-Cantelli lemma

ξ(n_k, (ρ_j, ρ_{j+1})) ≥ n_k^{−2} D*(n_k) ≥ n_k^{−2} exp((1 − ε)σ b_{n_k}^{−1})   a.s. (P)

for all but finitely many k (where j = j(k)). Consequently

ρ_{N_k} ≥ ρ_{j+1} − ρ_j ≥ ξ(n_k, (ρ_j, ρ_{j+1})) ≥ exp((1 − ε)σ b_{n_k}^{−1})   a.s. (P)   (31.7)

for all k big enough. By (29.23) and (31.2)

N_k(log N_k)^{−(1+ε)} ≤ D(I(N_k(log N_k)^{−(1+ε)}) + 1) ≤ D(n_k + 1) ≤ exp(n_k^{1/2}(log n_k)^{−δ})

i.e.

n_k ≥ (1/2)(log N_k)²(log log N_k)^{2δ}.   (31.8)

(31.7) and (31.8) combined imply for any δ* < 1 and for all but finitely
many k

ρ_{N_k} ≥ exp(σ log N_k(log log N_k)^{δ*})   a.s. (P).   (31.9)

(31.9) in turn implies Theorem 31.2.
Theorems 31.1 and 31.2 have shown how small ξ(0, N) can be. Essentially
we found that ξ(0, N) can be as small as N^{1/log log N}. In the next two
theorems we investigate the question of how big ξ(0, N) can be. In fact we
prove

THEOREM 31.3 There exists a C = C(β) > 0 such that

ξ(0, N) ≥ exp((1 − C/log₃ N) log N)   i.o. a.s. (P)

where β is defined in condition (C.1).

Proof. By (30.12) we have

(31.10)

Hence by Lemma 30.10 for any K > 0 there exists a C = C(K) > 0 such
that

P_ℰ{ξ(j, ρ_n) ≥ C n D*(j)} ≤ n^{−K}   (j = 1, 2, . . . , n = 1, 2, . . .).   (31.11)

Define the random sequence {N_k} as follows: let N_k be the smallest positive
integer for which

I(N_k(log N_k)^{1+ε}) ≥ n_k   (31.12)

where {n_k} is the random sequence of Lemma 31.2. Observe that by (31.4)
and (29.23) for all but finitely many k

exp(C₂ b_{n_k}^{−1}) ≤ D(n_k) ≤ D(I(N_k(log N_k)^{1+ε})) ≤ N_k(log N_k)^{1+ε}   a.s. (P₁).

Hence

(31.13)

and by (30.9) for all but finitely many k


Consequently

ξ(j, ρ_{N_k}) = 0   if j > n_k.   (31.14)

By (31.10), Lemma 30.10 and (31.13) for any K > 0 there exists a C =
C(K) > 0 such that

(31.15)

Hence by the Borel-Cantelli lemma, (31.13), (31.14), (31.15) and (31.4) we
get

for all but finitely many k. Since a similar inequality can be obtained for the
sum ∑_{j=1}^{∞} ξ(−j, ρ_{N_k}) and for ρ_{N_k} = ∑_{j=−∞}^{∞} ξ(j, ρ_{N_k}) we have

if k is big enough, which implies Theorem 31.3.

Looking through the above proofs of Theorems 31.2 and 31.3 one can
realize that somewhat stronger results were proved than stated. In fact we
have proved

THEOREM 31.4 For almost all environments ℰ and for all ε > 0 and C
big enough there exist two random sequences of positive integers

n₁ = n₁(ℰ, ε) < n₂ = n₂(ℰ, ε) < . . .   and
m₁ = m₁(ℰ, C) < m₂ = m₂(ℰ, C) < . . .

and

Remark 1. Theorems 31.1 - 31.3 are, as we call them, theorems of the
physicist. However, Theorem 31.4 can be considered as a theorem of the
Lord. Knowing the environment ℰ the Lord can find the time-points where
ξ(0, ·) will be very big or very small, while the physicist can only say that
there are infinitely many points where ξ(0, ·) takes very big resp. very small
values, but he does not know the location of these points.

In the last theorem of this chapter we prove that ξ(0, n) cannot be very
close to n, i.e. Theorem 31.4 is not far from the best possible one.

THEOREM 31.5 For any C > 0 we have

ξ(0, n) ≤ exp((1 − θ_n) log n)   a.s. (P)

for all but finitely many n where

Proof. Introduce the following notations:

N = N(n, ℰ) = [(log n)²(log₂ n)^{−(2+ε)}],
M⁺(ρ_j, ρ_{j+1}) = max_{ρ_j ≤ k ≤ ρ_{j+1}} R_k   (j = 1, 2, . . .),
ψ*(N) = max{n : 0 ≤ n ≤ N, T(n) ≤ −σ b_N^{−1}},
ξ*(x, n) = #{j : 1 ≤ j ≤ ξ(0, n), M⁺(ρ_{j−1}, ρ_j) ≥ x}.

Note that by Theorem 5.8 (especially Example 3) and by Invariance
Principle 2 we obtain

max_{0 ≤ k ≤ ψ*(N)} T_k ≤ ε(b(ψ*(N)))^{−1}   a.s. (P)   (31.16)

for any ε > 0 and for all but finitely many n. Hence by Lemma 30.1, (29.5)
and (31.16)

Consequently if C is big enough then for any n = 1, 2, . . . we have



and by the Borel-Cantelli lemma

(31.17)

for all but finitely many n. Applying Lemma 30.1 and the definition of
ψ*(N) we obtain

(31.18)

(31.19)

Applying Theorem 5.3 we obtain

ψ*(N) ≥ N exp(−2/(log log N)^{1/2})   a.s. (P)   (31.20)

for all but finitely many N. (31.19) and (31.20) imply that with some
C > 0

for all but finitely many n where

Hence we have Theorem 31.5.

Remark 2. Since θ_n ≪ (log₃ n)^{−1}, there is an essential gap between the
statements of Theorems 31.3 and 31.5. This is filled in by Hu and Shi (1998/A).

THEOREM 31.6 Assume condition (30.39) and let {a_n} be a nondecreasing
sequence of positive numbers. Then for any x ∈ Z¹ fixed, we have

if and only if

∑_{n=3}^{∞} 1/(n a_n log n) = ∞,

and

if and only if

∑_{n=3}^{∞} 1/(n b_n log n) = ∞.

They also proved

THEOREM 31.7 Assume condition (30.39). Then for any x ∈ Z¹ fixed,

where U₁ and U₂ are independent r.v.'s uniformly distributed in (0, 1).


Chapter 32

On the Favourite Value of the RWIRE


In this chapter we investigate the properties of the sequence ξ(n) = max_k ξ(k, n).
A trivial result can be obtained as a

Consequence of (30.32). For any ε > 0 we have

lim_{n→∞} ((log n)²(log log n)^{2+ε}/n) ξ(n) = ∞   a.s. (P).   (32.1)

We also get

Consequence of (30.33).

(32.2)

It looks obvious that much stronger results than those of (32.1) and (32.2)
should exist. In fact we prove in the next theorem that (under some extra
condition on ℰ)

limsup_{n→∞} ξ(n)/n > 0   a.s. (P).

THEOREM 32.1 (Révész, 1988). Assume that

P₁{E_i = p} = P₁{E_i = 1 − p} = 1/2   (0 < p < 1/2).   (32.3)

Then there exists a constant g = g(p) > 0 such that

limsup_{n→∞} ξ(n)/n ≥ g(p)   a.s. (P).

Remark 1. Very likely Theorem 32.1 remains true if condition (32.3) is
replaced by the usual conditions (C.1), (C.2), (C.3). Note that (32.3) implies
(C.1), (C.2) and (C.3).
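The model of Theorem 32.1 is easy to experiment with. The following small
simulation sketch is not part of the original text (all function names and the
parameter choices are illustrative): it samples the environment E_i ∈ {p, 1 − p}
with probability 1/2 each, as prescribed by (32.3), runs the walk, and records
the ratio ξ(n)/n of the favourite value appearing in the theorem.

    import random

    def favourite_value_process(p, steps, seed=0):
        """Simulate the RWIRE under condition (32.3): at site i the walk moves
        to i+1 with probability E_i and to i-1 with probability 1-E_i, where the
        E_i are i.i.d. and equal to p or 1-p with probability 1/2 each.
        Returns the sequence xi(n)/n, where xi(n) = max_k xi(k, n)."""
        rng = random.Random(seed)
        env, local_time = {}, {}
        r, best, ratios = 0, 0, []
        for n in range(1, steps + 1):
            if r not in env:                       # sample the environment lazily
                env[r] = rng.choice((p, 1.0 - p))
            r += 1 if rng.random() < env[r] else -1
            local_time[r] = local_time.get(r, 0) + 1
            best = max(best, local_time[r])        # current favourite value xi(n)
            ratios.append(best / n)
        return ratios

    ratios = favourite_value_process(p=0.3, steps=50_000)
    print("xi(n)/n at n = 50000:", round(ratios[-1], 4),
          "  largest ratio seen:", round(max(ratios), 4))

Because of the localization of Sinai's walk, the range stays of order (log n)²,
so the lazy sampling of the environment is cheap.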
In order to prove this theorem we first introduce a few notations.
Let N be a positive integer and define the random variables on Ω₁:


ρ_r(N) = min{k : k > 0, T_k = AN},
ρ_l(N) = min{k : k > 0, T_{−k} = −AN},
ρ⁺_{N,A} = −min{T_k : 0 ≤ k ≤ ρ_r(N)},
ρ⁻_{N,A} = max{T_{−k} : 0 ≤ k ≤ ρ_l(N)},
ρ_{N,A} = max{ρ⁺_{N,A}, ρ⁻_{N,A}},

where

A = log((1 − p)/p).

For the sake of simplicity from now on we assume that ρ⁺_{N,A} ≥ ρ⁻_{N,A}.
Continue the notations as follows:

and on Ω:

F_N = min{k : k > 0, R_k = α_N},
G_N = min{k : k > 0, R_k = ρ_l(N)},
H_N = min{k : k > F_N, R_k = r₁ or r₂} − F_N.

For the sake of simplicity from now on we assume that r₁ ≤ 0.
The above notations can be seen in the Figure, where instead of the process
T_k the process

T_k* = T_k if k ≥ 0,   T_k* = −T_{−k} if k ≤ 0

is shown.

Figure

Now we present a few simple lemmas.

LEMMA 32.1 There exists an absolute constant θ (0 < θ < 1) such that

P₁{L_N(j) ≤ 6j² + 4j + 2 (j = 0, 1, . . . , N − 1), A} ≥ θ.

Proof. Consequence 1 of Section 12.4 easily implies that

P₁{L_N(j) ≤ 6j² + 4j + 2 (j = 0, 1, . . . , N − 1)}   (32.4)

is larger than an absolute positive constant independent of N. It is easy
to see that

P₁{A} = P₁{ρ_r(N) ≤ N/2, ρ_l(N) ≤ N/2}   (32.5)

is larger than an absolute positive constant independent of N (cf. Consequence 4
of Section 10.2) and the events involved in (32.4) and (32.5) are
asymptotically independent as N → ∞. Hence we have Lemma 32.1.

Let N = N_k(ℰ) be a sequence of positive integers for which

L_N(j) ≤ 6j² + 4j + 2 (j = 0, 1, . . . , N − 1), and A holds

(by Lemma 32.1 for almost all ℰ there exists such an infinite sequence).
LEMMA 32.2 For almost all ℰ and for any ε > 0 we have

(32.6)

(32.7)

(32.8)

(N = N_k) a.s. (P_ℰ) for all but finitely many k and

F_N = o(G_N)   a.s. (P_ℰ).   (32.9)

Proof. By Lemma 30.1, (29.5) and the definition of N = Nk we have

2 -exp
EO (-:)
ON
and

(cf. the notations in Section 30.2). Hence by the Borel-Cantelli lemma
we easily obtain (32.6). The above two inequalities also imply that more
excursions are required to arrive at ρ_l(N) than at α_N. Hence we get (32.9).
The first inequality of (32.7) and (32.8) can be obtained similarly. In order
to prove the second inequality of (32.7) observe that by (32.8) and (32.9)

5 ( Q N - I - p r ( N ) )exp (1+ E ) -
( 3 a.s. (PE)

for any ε > 0 and for all but finitely many k. Hence the lemma is proved.

Introduce the following further notations:

β̂₁ = β̂₁(α_N) = min{n : n > 0, R_{F_N + n} = α_N},
β̂₂ = β̂₂(α_N) = min{n : n > β̂₁, R_{F_N + n} = α_N}, . . .
ξ̂(j, β̂_n, N) = ξ̂(j, β̂_n) = ξ(j, F_N + β̂_n) − ξ(j, F_N),

⋯ = U_{j+1}U_{j+2} ⋯ U_{α_N − 1} = exp(T_{α_N − 1} − T_j).

Clearly Lemmas 30.11 and 30.10 can be reformulated as follows:

LEMMA 32.3 For any j < α_N we have

(32.10)

where C₁ > 2p⁻¹. Further, for any K > 0 there exists a C = C(K) > 0
such that

(32.11)

Proof of Theorem 32.1. In order to simplify the notations from now on
we assume that T_N > 0. The case T_N ≤ 0 can be treated similarly.

Let 1/2 + ε < ψ₁ < ψ₂ < 1 − ε with some ε > 0 and introduce the
following notations:

n = [exp(ψ₁AN)]   where   N = N_k = N_k(ε)

and

χ(j) = min{k : r_N < k < α_N, T_k = −ρ_{N,A} + jA}.

Consider any integer l ∈ (χ(ψ₁N), α_N). Then

a.s. (P_ℰ) for all but finitely many k and by (32.10)

The last inequality of the above inequalities follows from the fact that
α_N < n. Consequently by (32.9) and Lemma 32.1

⋯ < C₁ n ∑_{l=0}^{∞} (4l + 2)² exp(−lA) = f(A) n   a.s. (P)   (32.12)

for all but finitely many k.

Let l ∈ (r_N, χ(ψ₁N)). Then by (32.11)

Consequently

⋯ ≤ 2((1 − p)/p) n exp(−ψ₁AN) + C(log n) α_N exp(⋯) = o(n).   (32.13)

(32.12) and (32.13) combined imply

∑_{l=r_N}^{α_N − 1} ξ̂(l, β̂_n) ≤ 2f(A) n   a.s. (P)   (32.14)

for all but finitely many k. Similarly one can see that
+
TN

Hence by (32.7)

-
TN

Let m = 4f(A)n. Then applying again (32.7) we get

ξ((1 + ε)m) ≥ ξ(F_N + m) ≥ ξ(F_N + β̂_n) ≥ ξ(α_N, F_N + β̂_n) = n = m/(4f(A))

which proves the Theorem.

Note that we have proved a stronger result than Theorem 32.1. In fact
we have

THEOREM 32.2 For almost all environments there exists a sequence of
positive integers n₁ = n₁(ℰ) < n₂ = n₂(ℰ) < . . . such that

provided that the condition of Theorem 32.1 is fulfilled.

Remark 2. On the connection of Theorems 32.1 and 32.2 the message of
Remark 1 of Section 31.2 can be repeated here as well.

Another simple consequence of the proof of Theorem 32.1 is

THEOREM 32.3 Assume that the condition of Theorem 32.1 is fulfilled.
Then there exists an ε = ε(p) > 0 such that

On the liminf behaviour of ξ(n) we present only a

Conjecture 1.

liminf_{n→∞} ((log log n)/n) ξ(n) = 0   a.s. (P)

and

liminf_{n→∞} ((log log n)²/n) ξ(n) = ∞   a.s. (P).

Conjecture 2. For any ε > 0 there exists a K = K(ε) > 0 such that

Chapter 33

A Few Further Problems

33.1 Two theorems of Golosov


Theorems 30.8 and 30.9 claim that M(n) ∼ (log n)². As we have already
mentioned, this fact was observed first by Sinai (1982). The result of Sinai
suggested to Golosov to investigate the limit distributions of the sequences

and

(for the definition of σ², cf. (C.3) of Chapter 29). In order to study the
limit distributions of R_n* and M_n* he modified a bit the original model. In
fact he assumed that E₀ = 1, i.e. the random walk is concentrated on
the positive half-line. Having this modified model he proved that the limit
distributions of the sequences {R_n*} and {M_n*} exist and he evaluated
them. In fact we have

THEOREM 33.1 (Golosov, 1983). For any u > 0


, PU

(33.1)

and
(33.2)


Considering the original model Kesten (1986) proved

THEOREM 33.2

where

Sinai (1982) also proved that there exists a sequence of random variables
a₁, a₂, . . . defined on Ω₁ such that R_n − a_n = o((log n)²) in probability (P).
This means that knowing the environment ℰ we can evaluate the sequence
{a_n} and having the sequence {a_n} we can get a much better estimate of
the location of the particle R_n than that of Theorem 30.6. Golosov proved
a much stronger theorem. His result claims that R_n − a_n has a limit
distribution (without any normalising factor), which means that knowing
a_n the location of the particle can be predicted with a finite error term
with a big probability. The model used by Golosov is a little bit different
from the one discussed up to now. He considered a random walk on the
right half-line only and he assumed that the particle can stay where it is
located. His model can be formulated as follows.

Let ℰ = {p_{−1}(n), p₀(n), p₁(n)} (n = 0, 1, 2, . . .) be a sequence of
independent random three-dimensional vectors whose components are
non-negative and p_{−1}(0) = 0, p_{−1}(n) + p₀(n) + p₁(n) = 1 (n = 0, 1, 2, . . .).
Assume further that

(i) (p_{−1}(n), p₁(n)) (n = 1, 2, . . .) are identically distributed,
(ii) p₀(n) (n = 0, 1, 2, . . .) are identically distributed,
(iii) the sequences {p₀(n), n = 0, 1, 2, . . .} and
{p_{−1}(n)/p₁(n), n = 1, 2, . . .} are independent,
(iv) E log(p_{−1}(n)/p₁(n)) = 0 and 0 < E(log(p_{−1}(n)/p₁(n)))² = σ² < ∞,
(v) E(1 − p₀(n))⁻¹ < ∞ and P{p₀(n) > 0} > 0.

Having the environment ℰ we define the random walk {R_n} by R₀ = 0
and

P_ℰ{R_{n+1} = i + θ | R_n = i, R_{n−1}, . . . , R₀} = p_{−1}(i) if θ = −1,
                                                    p₀(i) if θ = 0,
                                                    p₁(i) if θ = 1.

Then we have

THEOREM 33.3 (Golosov, 1984). There exists a random sequence {a_n}
defined on Ω₁ such that for any −∞ < y < ∞

lim_{n→∞} P{R_n − a_n ≤ y} = F(y)

where the exact form of the distribution function F(y) is unknown.

Remark 1. Clearly Theorems 33.1 and 33.2 can be considered as theorems
of the physicist. However, Theorem 33.3 is a theorem of a mixed type. The
physicist knows about the existence of a_n but he cannot evaluate it. The
Lord can evaluate a_n but He cannot use His further information on ℰ. In
fact, He would like to evaluate the distribution P_ℰ{R_n − a_n ≤ y}. It is not
clear at all whether lim_{n→∞} P_ℰ{R_n − a_n ≤ y} exists for any given ℰ.

Remark 2. Theorems 32.1 and 32.2 also suggest that R_n should be close
to a_n.
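Golosov's half-line model described above is also straightforward to simulate.
The sketch below is not from the book; it only illustrates one admissible
environment distribution satisfying (i)-(v): p₀(n) = 1/3 is kept constant and
log(p_{−1}(n)/p₁(n)) = ±1 with probability 1/2 each, so that condition (iv)
holds. All names are illustrative.

    import math
    import random

    def make_environment(sites, seed=0):
        """Sample a half-line environment with p0(n) = 1/3 and
        log(p_{-1}(n)/p_1(n)) = +1 or -1 with probability 1/2 each,
        so E log(p_{-1}/p_1) = 0.  At the origin p_{-1}(0) = 0."""
        rng = random.Random(seed)
        env = {0: (0.0, 1.0 / 3.0, 2.0 / 3.0)}        # (p_{-1}, p_0, p_1) at site 0
        for n in range(1, sites + 1):
            x = rng.choice((-1.0, 1.0))               # log(p_{-1}(n)/p_1(n))
            p_minus = (2.0 / 3.0) * math.exp(x) / (1.0 + math.exp(x))
            env[n] = (p_minus, 1.0 / 3.0, 2.0 / 3.0 - p_minus)
        return env

    def lazy_walk(env, steps, seed=1):
        """Run the walk: from site i move to i-1, stay, or move to i+1 with
        probabilities p_{-1}(i), p_0(i), p_1(i)."""
        rng = random.Random(seed)
        r = 0
        for _ in range(steps):
            p_minus, p_zero, _ = env[r]
            u = rng.random()
            r += -1 if u < p_minus else (0 if u < p_minus + p_zero else 1)
        return r

    # sites is chosen far larger than the (log n)^2-sized region the walk visits
    env = make_environment(sites=5_000)
    print("position after 200000 steps:", lazy_walk(env, steps=200_000))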

33.2 Non-nearest-neighbour random walk

The model studied in the previous chapters of this Part is a nearest-neighbour
model, i.e. the particle moves in one step to one of its neighbours.
In the last model of Golosov the particle keeps its place or moves to one
of its neighbours. In a non-nearest-neighbour model the particle can move
farther. Such a model can be formulated as follows.

Let ℰ = {p_{−1}(n), p₁(n), p₂(n)} (n = 0, ±1, ±2, . . .) be a sequence of
independent, identically distributed three-dimensional random vectors whose
components are non-negative and p_{−1}(n) + p₁(n) + p₂(n) = 1 (n = 0, ±1, . . .).
Then we define a random walk {R_n} by R₀ = 0 and

P_ℰ{R_{n+1} = i + θ | R_n = i, R_{n−1}, . . . , R₀} = p_{−1}(i) if θ = −1,
                                                    p₁(i) if θ = 1,
                                                    p₂(i) if θ = 2.

Studying the properties of {R_n} is much, much harder than in the nearest-neighbour
case. Even the question of the recurrence is very hard. In fact,
the question is to find the necessary and sufficient condition on the
distribution function

P₁{p_{−1}(i) < u_{−1}, p₁(i) < u₁, p₂(i) < u₂} = F(u_{−1}, u₁, u₂)

which guarantees that

P{R_n = 0 i.o.} = 1.   (33.3)

This question was studied in a more general form by Key (1984), who in
the above formulated case obtained the required condition.
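Since the text does not fix a particular distribution F, the following sketch
(not from the book, and not Key's construction) simply draws each vector
(p_{−1}(n), p₁(n), p₂(n)) uniformly from the probability simplex and counts the
returns of the walk with steps in {−1, +1, +2} to the origin.

    import random

    def nn_walk_visits(steps, seed=0):
        """Walk with steps in {-1, +1, +2}; the environment vectors
        (p_{-1}, p_1, p_2)(n) are sampled lazily, each uniform on the simplex
        (an illustrative choice only)."""
        rng = random.Random(seed)
        env = {}
        r, visits = 0, 0
        for _ in range(steps):
            if r not in env:
                a, b = sorted((rng.random(), rng.random()))
                env[r] = (a, b - a, 1.0 - b)          # sums to 1
            p_minus, p_one, _ = env[r]
            u = rng.random()
            r += -1 if u < p_minus else (1 if u < p_minus + p_one else 2)
            if r == 0:
                visits += 1
        return visits, r

    visits, final = nn_walk_visits(steps=200_000)
    print("visits to the origin:", visits, "  final position:", final)

With this particular choice the mean step is positive, so one typically observes
R_n → ∞, in agreement with the trichotomy quoted below; a distribution F
balancing the drift would be needed to see the recurrent case.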

and

Then
P{Rn = 0 i.o.} = 1 if m = 0,
P{limRn=oo}=l if m > 0 ,
n+cc
P{ lim Rn = - m } = 1 if m < 0.
n4w

Remark 1. Clearly this Theorem gives the necessary and sufficient condi-
tion of (33.3) if the expectation m exists. If m does not exist then the nec-
essary and sufficient condition is unknown, just as in the nearest-neighbour
case.

Remark 2. The general non-nearest-neighbour case (i.e. when the environment
ℰ is defined by an i.i.d. sequence

{p_{−L}(n), p_{−L+1}(n), . . . , p_R(n)}

where L and R are positive integers) was also investigated by Key. However,
he could not obtain an explicit condition for (33.3), but he proved a general
zero-one law which implies that

P{R_n = 0 i.o.} = 0 or 1.

His zero-one law was generalized by Andjel (1988).

33.3 RWIRE in Z^d

The model of the RWIRE can be trivially extended to the multivariate
case. For the sake of simplicity here, we formulate the model in the case
d = 2. Let U_{ij} = (U_{ij}^{(1)}, U_{ij}^{(2)}, U_{ij}^{(3)}, U_{ij}^{(4)}) (i, j = 0, ±1, ±2, . . .) be an array
of i.i.d.r.v.'s with U_{ij}^{(k)} ≥ 0, U_{ij}^{(1)} + U_{ij}^{(2)} + U_{ij}^{(3)} + U_{ij}^{(4)} = 1. The array
ℰ = {U_{ij}, i, j = 0, ±1, ±2, . . .} is called a two-dimensional random environment.
Having an environment ℰ a random walk {R_n, n = 0, 1, 2, . . .} can be
defined by R₀ = 0 and

P{R_{n+1} = (i+1, j) | R_n = (i, j), R_{n−1}, . . . , R₀} = U_{ij}^{(1)},
P{R_{n+1} = (i, j+1) | R_n = (i, j), R_{n−1}, . . . , R₀} = U_{ij}^{(2)},
P{R_{n+1} = (i−1, j) | R_n = (i, j), R_{n−1}, . . . , R₀} = U_{ij}^{(3)},
P{R_{n+1} = (i, j−1) | R_n = (i, j), R_{n−1}, . . . , R₀} = U_{ij}^{(4)}.
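A minimal simulation sketch of the planar model just defined (not from the
book; drawing each U_{ij} uniformly from the simplex is only an illustrative
choice, and all names are assumptions):

    import random

    def rwire_2d(steps, seed=0):
        """Planar RWIRE: from (i, j) the walk moves to (i+1, j), (i, j+1),
        (i-1, j) or (i, j-1) with probabilities U_ij^(1), ..., U_ij^(4).
        Each U_ij is drawn lazily, uniformly from the probability simplex."""
        rng = random.Random(seed)
        env = {}
        moves = ((1, 0), (0, 1), (-1, 0), (0, -1))
        pos, visits = (0, 0), 0
        for _ in range(steps):
            if pos not in env:
                c = sorted(rng.random() for _ in range(3))
                env[pos] = (c[0], c[1] - c[0], c[2] - c[1], 1.0 - c[2])
            u, acc = rng.random(), 0.0
            for prob, (di, dj) in zip(env[pos], moves):
                acc += prob
                if u < acc:
                    pos = (pos[0] + di, pos[1] + dj)
                    break
            if pos == (0, 0):
                visits += 1
        return visits

    print("visits to the origin in 100000 steps:", rwire_2d(100_000))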

No non-trivial sufficient condition is known for the recurrence

P{R_n = 0 i.o.} = 1

in the case d ≥ 2. Kalikow (1981) gave necessary conditions. In fact, he
gave a class of environments where P{R_n = 0 i.o.} = 0. As a consequence
of his result he proves

THEOREM 33.5 Define the environment ℰ by

Assume that

Then

P{R_n = 0 i.o.} = 0,

moreover

where R_n^{(1)} is the first coordinate of R_n.

Kalikow also proves a zero-one law, i.e. he can prove under some regularity
conditions that P{R_n = 0 i.o.} = 0 or 1. This zero-one law was
extended by Andjel (1988).

Kalikow also formulated some unsolved problems. Here we quote two
of them.

Problem 1. Is every three-dimensional RWIRE transient?

Problem 2. Let 0 < p < 1/2 and define the random environment ℰ by

Is this RWIRE recurrent?



33.4 Non-independent environments


In Chapter 28 we mentioned the magnetic fields as possible applications of
the RWIRE. However, up to now it was assumed that the environment &
consists of i.i.d.r.v.’s. Clearly the condition of independence does not meet
with the properties of the magnetic fields and most of the possible physical
applications. In most cases it can be assumed that the environment is a
stationary field. A lot of papers are devoted to studying the properties of
the RWIRE in case of a stationary environment &.
In the multivariate case it turns out that having some natural conditions
on the stationary environment & (which exclude the case of independent
environments) one can prove the recurrence and a central limit theorem
with a normalizing factor (log n)’.

33.5 Random walk in random scenery

Let C = {C_i = ζ(i), i = 0, ±1, ±2, . . .} be a sequence of i.i.d.r.v.'s with

E C_i = 0,  E C_i² = 1,  E(exp t C_i) < ∞

for some |t| < t₀ (t₀ > 0). C is called a random scenery. Further, let {S_k}
(independent of {C_i}) be a simple symmetric random walk. Kesten and
Spitzer (1979) were interested in the sum

K_n = n^{−3/4} ∑_{k=0}^{n} C_{S_k}.

If the particle has to pay C_i guilders whenever it visits i, then the amount
paid by the particle during the first n steps of the random walk is n^{3/4}K_n.
Clearly

K_n = n^{−3/4} ∑_{k=−∞}^{+∞} C_k ξ(k, n)

where ξ(·, ·) is the local time of {S_k}.
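For a concrete picture, K_n can be simulated directly from the definition. The
sketch below is not from the book; it uses a standard normal scenery, which is
one choice satisfying the moment conditions above, and samples the scenery
lazily along the path of the walk.

    import random

    def kesten_spitzer_K(n, seed=0):
        """Simulate K_n = n^{-3/4} * sum_{k=0}^{n} C_{S_k} for a simple symmetric
        walk S_k and an i.i.d. standard normal scenery C_i."""
        rng = random.Random(seed)
        scenery = {}                      # scenery values are drawn site by site
        s, total = 0, 0.0
        for _ in range(n + 1):            # k = 0, 1, ..., n
            if s not in scenery:
                scenery[s] = rng.gauss(0.0, 1.0)
            total += scenery[s]
            s += rng.choice((-1, 1))
        return total / n ** 0.75

    print([round(kesten_spitzer_K(100_000, seed=s), 3) for s in range(5)])

Repeating the experiment over several seeds gives a rough impression of the
non-degenerate limit distribution discussed next.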
Studying the sequence {K_n} Kesten and Spitzer argue heuristically as
follows: let ∑_{k=0}^{n} C_k = L_n (n = 0, ±1, ±2, . . .); then one can define
independent Wiener processes {W₁(n) (n = 0, 1, . . .)} and {W₂(n) (n = 0, ±1, . . .)}
such that W₂(n) should be near enough to L_n and simultaneously
ξ(k, n) should be near to the local time η₁(k, n) of the Wiener process
W₁(·). Hence

K_n ≈ n^{−3/4} ∫_{−∞}^{+∞} η₁(x, n) dW₂(x).   (33.4)

Since it is not very hard to prove that n^{−3/4} ∫_{−∞}^{+∞} η₁(x, n) dW₂(x) has a
limit distribution, the above heuristic approach suggests that K_n has a
limit distribution.

Applying Invariance Principle 2 (Section 6.2) and Theorem 10.1 it is
not hard to get a precise form of (33.4).

We note that Kesten and Spitzer investigated a much more general situation
than the above one and they initiated an extended research of random
sceneries. They conjectured that in the case when {S_k} is a simple random
walk on the plane then n^{3/4}K_n can be approximated by a Wiener process.
This conjecture was proved by Bolthausen (1989) (see also Borodin, 1980).
The strongest form of this statement is
The strongest form of this statement is

THEOREM 33.6 (Csáki-Révész-Shi, 2001). Let d = 2 and assume that
E|C_i|^q < ∞ for some q > 2. Possibly in an enlarged probability space,
there exists a version of K_n and a standard one-dimensional Wiener process
{W(t), t ≥ 0} such that for any ε > 0 as n → ∞

A similar result can be obtained in case d ≥ 3.

THEOREM 33.7 (Révész-Shi, 2000). Let d ≥ 3 and assume that E|C_i|^q < ∞
for some q > 2. Possibly in an enlarged probability space there exists a
Wiener process {W(t), t ≥ 0} such that for any ε > 0

where

Consequence.

Here we present only the



Proof of (33.5). Let

A(i, n) = {x ∈ Z^d : ξ(x, n) = i} = {x₁(i, n), x₂(i, n), . . . , x_{Q(i,n)}(i, n)}

where Q(i, n) is the number of sites visited exactly i times up to n. Then

It is easy to see (cf. Theorem 22.1) that

∑_{i=t_n+1}^{∞} ∑_{x∈A(i,n)} ⋯

Assume that i ≤ t_n and Q(i, n) ≥ nγ²(1 − γ)^{i−1}. Then

∑_{x∈A(i,n)} ∑_{l=1}^{⋯} ⋯

It is easy to see that

and

The case Q(i, n) < nγ²(1 − γ)^{i−1} can be treated similarly. Since the r.v.'s

are independent we have (33.5).



33.6 Random environment and random scenery

It is a natural idea to introduce a common generalization of the random
environment and of the random scenery.

Let R_n be a RWIRE (Section 33.3) and let ζ(x) (x ∈ Z^d) be an array
of i.i.d.r.v.'s. Then we are interested in the properties of the sequence
∑_{k=0}^{n} ζ(R_k). Here we present only a

Conjecture. Assume that there exists an a > 0 such that P{ζ(x) ≥ a} > 0.
Then

limsup_{n→∞} (∑_{k=0}^{n} ζ(R_k))/(an) = 1   a.s.

33.7 Reinforced random walk

Construct a random environment on Z¹ by the following procedure. Let
R₀ = 0, P{R₁ = 1} = P{R₁ = −1} = 1/2 and let the weight of each
interval (i, i+1) (i = 0, ±1, ±2, . . .) be initially 1 and increased by 1 each
time the process jumps across it, so that its weight at time n
is one plus the number of indices k ≤ n such that (R_k, R_{k+1}) is either
(i, i+1) or (i+1, i). Given {R₀ = 0, R₁ = i₁, . . . , R_n = i_n}, R_{n+1} is
either i_n + 1 or i_n − 1 with probabilities proportional to the weights at time
n of (i_n, i_n+1) and (i_n−1, i_n), where i₁, i₂, . . . , i_n is a sequence of integers
with |i_{j+1} − i_j| = 1 (j = 1, 2, . . . , n). Hence if R₁ = 1, the weight of [0, 1]
at time n = 1 is 2. Consequently

P{R₂ = 2 | R₁ = 1} = 1/3   and   P{R₂ = 0 | R₁ = 1} = 2/3.

Similarly

P{R₂ = −2 | R₁ = −1} = 1/3   and   P{R₂ = 0 | R₁ = −1} = 2/3.

Further, in the case R₁ = 1, R₂ = 2 the weights of [0,1] and [1,2] at time
n = 2 are equal to 2. Hence

P{R₃ = 3 | R₁ = 1, R₂ = 2} = 1 − P{R₃ = 1 | R₁ = 1, R₂ = 2} = 1/3.

In the case R₁ = 1, R₂ = 0 the weight of [0,1] at time n = 2 is equal to 3.
Hence

P{R₃ = 1 | R₁ = 1, R₂ = 0} = 1 − P{R₃ = −1 | R₁ = 1, R₂ = 0} = 3/4.

Similarly

P{R₃ = −3 | R₁ = −1, R₂ = −2} = 1 − P{R₃ = −1 | R₁ = −1, R₂ = −2} = 1/3

and

P{R₃ = −1 | R₁ = −1, R₂ = 0} = 1 − P{R₃ = 1 | R₁ = −1, R₂ = 0} = 3/4.
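The construction translates directly into a simulation. The sketch below is not
from the book (names are illustrative); it keeps the weight of every edge
(i, i+1) and reproduces the transition rule, so that, for example,
P{R₂ = 2 | R₁ = 1} = 1/3 arises from weights 2 and 1 at site 1.

    import random

    def reinforced_walk(steps, seed=0):
        """Linearly edge-reinforced walk on Z: every edge (i, i+1) starts with
        weight 1 and its weight grows by 1 at each traversal; from the current
        site the walk steps right or left with probabilities proportional to
        the weights of the two adjacent edges."""
        rng = random.Random(seed)
        weight = {}                                   # weight of edge (i, i+1)

        def w(i):
            return weight.get(i, 1)

        r, path = 0, [0]
        for _ in range(steps):
            right, left = w(r), w(r - 1)              # edges (r, r+1) and (r-1, r)
            if rng.random() < right / (right + left):
                weight[r] = right + 1
                r += 1
            else:
                weight[r - 1] = left + 1
                r -= 1
            path.append(r)
        return path

    path = reinforced_walk(100_000)
    print("returns to 0:", sum(1 for x in path[1:] if x == 0),
          "  range:", (min(path), max(path)))

The pronounced pile-up of the path near the origin in such runs is consistent
with the walk being "more recurrent" than the simple symmetric one, as
discussed next.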
This model was introduced by Coppersmith and Diaconis (cf. Davis
(1989)) and studied by Davis (1989, 1990).
Intuitively it is clear that the random walk generated by this model is
“more recurrent’’ than the simple symmetric random walk. However, to
prove that it is recurrent is not easy at all. This was done by Davis (1989,
1990), who studied the recurrence in more general models as well.
Note that in this model the random environment is changing in time
and depends on the random walk itself. Situations where the random envi-
ronment is changing in time look very natural in different practical models.
A more general model is the following: the random walk {R_i} starts
from the origin of Z¹ and at time i+1 it jumps to one of the two
neighbouring sites of R_i, so that the probability of jumping along a bond of the
lattice is proportional to

w(number of previous jumps along that bond)

where w is a weight function. More formally, for a nearest neighbour walk
x^{(i)} = (x₀, x₁, . . . , x_i) let

r(x^{(i)}) = #{j : 0 ≤ j ≤ i, (x_j, x_{j+1}) = (x_i, x_i + 1) or (x_i + 1, x_i)},
l(x^{(i)}) = #{j : 0 ≤ j ≤ i, (x_j, x_{j+1}) = (x_i, x_i − 1) or (x_i − 1, x_i)},

that is, r (resp. l) shows how many times the walk has visited the edge
adjacent from the right (resp. left) to the terminal site x_i. Then the
random walk {R_i} is governed by the law

P{R_{i+1} = R_i + 1 | R_j = x_j, j = 0, 1, 2, . . . , i}
   = w(r(x^{(i)})) / (w(r(x^{(i)})) + w(l(x^{(i)})))
   = 1 − P{R_{i+1} = R_i − 1 | R_j = x_j, j = 0, 1, 2, . . . , i}.

Clearly the properties of the random walk {R_i} depend strongly on the
choice of the weight function w(·). The most complete description of this
type of walks is due to Tóth (1995, 1996, 1997, 1999). He studied for
example the cases when

(w(n))^{−1} = 2^{−a}(a + 1)n^{a} + 2^{1−a}Bn^{a−1} + O(n^{−2})   (a ∈ [0, ∞), B ∈ ℝ)

and

w(n) = e^{−gn}   (g > 0).

For example in the first case with a ≥ 0 he proved that

exists and its value is evaluated.


References
ADELMAN, 0. - BURDZY, K. - PEMANTLE, R.
(1998) Sets avoided by Brownian motion. The Annals of Probability 26,
429-464.
ANDJEL, E. D.
(1988) A zero or one law for one-dimensional random walks in random
environments. The Annals of Probability 16,722-729.

AUER, P.
(1989) Some hitting probabilities of random walks on 2’. Coll. Math.
SOC. J. Bolyai: Limit Thorems i n Probability and Statistics (ed. I.
Berkes, E. CsBki, P. RCvCsz) North-Holland, 57, 9-25.
(1990) The circle homogeneously covered by random walk on 2’.Statis-
tics & Probability Letters 9, 403-407.

AUER, P. - REVESZ, P.
(1990) On the relative frequency of points visited by random walk on
Z 2 . Coll. Math. Soc. J . Bolyai: Limit Theorems in Probability and
Statistics (ed. I. Berkes, E. Csaki, P. RCvCsz) North-Holland, 5 7 , 27-33.

BARTFAI, P.
(1966) Die Bestimmung der zu einem wiederkehrenden Prozess gehoren-
den Verteilungsfunktion aus den mit Fehlern behafteten Daten einer
Einzigen Realisation. Studia Sci. Math. Hung. 1, 161-168.

BASS, R. F. - GRIFFIN, P. S.
(1985) The most visited site of Brownian motion and simple random
walk. 2. Wahrscheinlichkeitstheorie verw. Gebiete 70,417-436.

BASS, R. F. - KUMAGAI, T.
(2002) Laws of the iterated logarithm for the range of random walks in
two and three dimensions. The Annals of Probability 30, 1369-1396.

BENJAMINI, I. - HAGGSTROM, 0. - PERES, Y. - STEIF, J. E.


(2003) Which properties of a random sequence are dynamically sensitive.
The Annals of Probability 31, 1-134.

BERKES, I.
(1972) A remark to the law of the iterated logarithm. Studia Sci. Math.
Hung. 7 , 189-197.

BICKEL, P. J. - ROSENBLATT, M.
(1973) On some global measures of the deviations of density function
estimates. The Annals of Statistics 1, 1071-1095.


BILLINGSLEY, P.
(1968) Convergence of Probability Measures. J. Wiley, New York.

BINGHAM, N. H.
(1989) The work of A. N. Kolmogorov on strong limit theorems. Theory
of Probability and its Applications 34, 152-164.

BOLTHAUSEN, E.
(1978) On the speed of convergence in Strassen’s law of the iterated
logarithm. The Annals of Probability 6,668-672.
(1989) A central limit theorem for two-dimensional random walk in ran-
dom sceneries. The Annals of Probability 17,108-115.

BOOK, S. A. - SHORE, T. R.
(1978) On large intervals in the Csorg6 - RBvCsz Theorem on Increments
of a Wiener Process. 2. Wahrscheinlichkeitstheorie verw. Gebiete 46,
1-1 1.
BOREL, E.
(1909) Sur les probabilités dénombrables et leurs applications arithmétiques.
Rendiconti del Circolo Mat. di Palermo 26, 247-271.

BORODIN, A. N.
(1982) Distribution of integral functionals of Brownian motion. Zap.
Nauchn. Semin. Leningrad Otd. Mat. Inst. Steklova 119,13-38.
(1986/A,B) On the character of convergence to Brownian local time I,
11. Probab. Th. Rel. Fields 72,231-250, 251-277.

BROSAMLER, G. A.
(1988) An almost everywhere central limit theorem. Math. Proc. Camb.
Phil. SOC.104,561-574.

CHEN, X.
(2004) Exponential asymptotics and law of the iterated logarithm for
intersection local times of random walks. The Annals of Probability 32,
3248-3300.
CHEN, X. - LI, W.
(2004) Large and moderate deviations for intersection local times.
Probab. T h . Rel. Fields 128,213-254.

CHUNG, K. L.
(1948) On the maximum partial sums of sequences of independent ran-
dom variables. Trans. A m . Math. SOC.64,205-233.

CHUNG, K. L. - ERDOS, P.
(1952) On the application of the Bore1 - Cantelli lemma. Trans. A m .
Math. SOC.72, 179-186.

CHUNG, K. L. - HUNT, G. A.
(1949) On the zeros of C: f l . Annals of Math. 50, 385-400.

COPPERSMITH, D. - DIACONIS, P.
(1987) Random walks with reinforcement. Stanford Univ. Preprint.

CSAKI, E.
(1978) On the lower limits of maxima and minima of Wiener process and
partial sums. 2. Wahrscheinlichkeitstheorie verw. Gebiete 43, 205-221.
(1980) A relation between Chung’s and Strassen’s law of the iterated
logarithm. 2. Wahrscheinlichkeitstheorie verw. Gebiete 54, 287-301.
(1989) An integral test for the supremum of Wiener local time. Probab.
Th. Rel. Fields 83,207-217.
(1990) A liminf result in Strassen’s law of the iterated logarithm. Coll.
Math. SOC.J . Bolyai: Limit Theorems in Probability and Statistics (ed.
I. Berkes, E. CsAki, P. RCvBsz) North-Holland, 57,83-93.

CSAKI, E. - CSORGO, M. - FOLDES, A. - REVESZ, P.


(1983) How big are the increments of the local time of a Wiener process?
The Annals of Probability 11,593-608.
(1989) Brownian local time approximated by a Wiener-sheet. The An-
nals of Probability 17,516-537.

CSAKI, E. - ERDOS, P. - REVESZ, P.


(1985) On the length of the longest excursion. 2. Wahrscheinlichkeits-
theorie verw. Gebiete 6 8 , 365-382.

CSAKI, E. - FOLDES, A.
(1984/A) On the narrowest tube of a Wiener process. Coll. Math. SOC.
J . Bolyai: Limit Theorems i n Probability and Statistics (ed. P. RCvBsz)
North-Holland, 36, 173-197.
(1984/B) The narrowest tube of a recurrent random walk. 2.
Wahrscheinlichkeitstheorie verw. Gebiete 66,387-405.
(1984/C) How big are the increments of the local time of a simple sym-
metric random walk? Coll. Math. SOC. J . Bolyai: Limit Theorems i n
Probability and Statistics (ed. P. RBvBsz) North-Holland, 36,199-221.
(1986) How small are the increments of the local time of a Wiener pro-
cess? The Annals of Probability 14,533-546.

(1987) A note on the stability of the local time of a Wiener process.


Stochastic Processes and their Applications 25,203-213.
(1988/A) On the length of the longest flat interval. Proc. of the 5th
Pannonian Symp. on Math. Stat. (ed. Grossmann, W. - Mogyorbdi, J.
- Vincze, I. - Wertz, W.). 23-33.
(1988/B) On the local time process standardized by the local time at
zero. Acta Mathernatica Hungarica 52, 175-186.

CSAKI, E. - FOLDES, A. - KOMLOS, J.


(1987) Limit theorems for Erdtis - R h y i type problems. Studia Sci.
Math. Hung. 2 2 , 321-332.

CSAKI, E. - FOLDES, A. - REVESZ, P.


(1987) On the maximum of a Wiener process and its location. Probab.
T h . Rel. Fields 76,477-497.
(2005) Heavy points of a d-dimensional simple random walk. To appear.

CSAKI, E. - FOLDES, A. - REVESZ, P. - ROSEN, J. - SHI, Z.


(1998) A strong invariance principle for the local time difference of a
symple symmetric planar random walk. Studia Sci. Math. Hung. 34,
25-39.
(2005) Frequently visited sets for random walks. To appear.

CSAKI, E. - FOLDES, A. - REVESZ, P. - SHI, Z.


(1999) On the excursions of two-dimensional random walk and Wiener
process. Bolyai SOC.Math. Studies 9. (ed. P. RC.vCsz, B. T6th) 43-58.

CSAKI, E. - GRILL, K.
(1988) On the large values of the Wiener process. Stochastic Processes
and their Applications 27,43-56.

CSAKI, E. - REVESZ, P.
(1979) How big must be the increments of a Wiener process? Acta Math.
Acad. Sci. Hung. 33, 37-49.
(1983) A combinatorial proof of P. LCvy on the local time. Acta Sci.
Math. Szeged 45, 119-129.

CSAKI, E. - REVESZ, P. - ROSEN, J.


(1998) Functional law of iterated logarithm for local times of recurrent
random walks on Z2. Ann. Inst. Henri Poincare', Probabilite's et statis-
tiques 34, 545-463.

CSAKI, E. - REVESZ, P. - SHI, Z.


(2000) Favourite sites, favourite values and jump sizes for random walk
and Brownian motion. Bernoulli 6,951-975.
(2001/A) A strong invariance principle for two-dimensional random walk
in random scenery. Stochastic Processes and their Applications 92,
181-200.
(2001/B) Long excursions of a random walk. J . of Theoretical Probability
14,821-844.

CSAKI, E. - VINCZE, I.
(1961) On some problems connected with the Galton test. Publ. Math.
Inst. Hung. Acad. Sci. 6,97-109.

CSORG6, M. - HORVATH, L.
(1989) On best possible approximations of local time. Statistics k3 Prob-
ability Letters 8,301-306.

CSORG6, M. - HORVATH, L. - REVESZ, P.


(1987) Stability and instability of local time of random walk in random
environment. Stochastic Processes and their Applications 25,185-202.

CSORG6, M. - REVESZ, P.
(1979/A) How big are the increments of a Wiener process? The Annals
of Probability 7,731-737.
(1979/B) How small are the increments of a Wiener process? Stochastic
Processes and their Applications. 8,119-129.
(1981) Strong Approximations in Probability and Statistics. AkadCmiai
Kiadb, Budapest and Academic Press, New York.
(1985/A) On the stability of the local time of a symmetric random walk.
Acta Sca. Math. 48,85-96.
(1985/B) On strong invariance for local time of partial sums. Stochastic
Processes and their Applications 20, 59-84.
(1986) Mesure du voisinage and occupation density. Probab. Th. Rel.
Fields 73,211-226.
(1992) Long random walk excursions and local time. Stochastic Processes
and their Applications 41,181-190.

DARLING, D. A. - ERDOS, P.
(1956) A limit theorem for the maximum of normalized sums of inde-
pendent random variables. Duke Math. J . 23, 143-155.

DAVIS, B.
(1989) Loss of recurrence in reinforced random walk. Technical Report,
Purdue University.

(1990) Reinforced random walk. Probab. Th. Rel. Fields. 84,203-229.

DE ACOSTA, A.
(1983) Small deviations in the functional central limit theorem with
applications to functional laws of the iterated logarithm. The Annals of
Probability 11, 78-101.

DEHEUVELS, P.
(1985) On the ErdBs - RCnyi theorem for random fields and sequences
and its relationship with the theory of runs and spacings. 2. Wahrschein-
lichkeitstheorie verw. Gebiete 70, 91-115.

DEHEUVELS, P - DEVROYE, L. - LYNCH, I.


(1986) Exact convergence rates in the limit theorem of ErdBs - RCnyi
and Shepp. The Annals of Probability 14,209-223.

DEHEUVELS, P. - ERDOS, P. - GRILL, K. - REVESZ, P.


(1987) Many heads in a short block. Mathematical Statistics and Prob-
ability Theory, Vol. A., Proc. of the 6th Pannonian Symp. (ed. Puri,
M. L. - RCvCsz, P. - Wertz, W.) 53-67.

DEHEUVELS, P. - REVESZ, P.
(1986) Simple random walk on the line in random environment. Probab.
Th. Rel. Fields 72, 215-230.
(1987) Weak laws for the increments of Wiener processes, Brownian
bridges, empirical processes and partial sums of i.i.d.r.v.'s. Mathematical
Statistics and Probability Theory, Vol. A., Proc. of the 6th Pannonian
Symp. (ed. Puri, M. L. - RCvCsz,'P. - Wertz, W.) 69-88.

DEHEUVELS, P. - STEINEBACH, J.
(1987) Exact convergence rates in strong approximation laws for large
increments of partial sums. Probab. Th. Rel. Fields 76, 369-393.

DEMBO, A. - PERES, Y. - ROSEN, J. - ZEITOUNI, 0.


(2000) Thick points for spatial Brownian motion: multifractal analysis
of occupation measure. The Annals of Probability 28, 1-35.
(2001) Thick points for planar Brownian motion and the ErdBs - Taylor
conjecture on random walk. Acta Mathematica 186,239-270.
(2005/A) The largest disc covered by a planar random walk. To appear.
(2005/B) Cover time for Brownian motion and random walks in two
dimensions. To appear.

DOBRUSHIN, R. L.
(1955) Two limit theorems for the simplest random walk on a line. Uspehi
Math. Nauk (N. N) 10,139-146. In Russian.

DONSKER, M. D. - VARADHAN, S. R. S.
(1977) On laws of the iterated logarithm for local times. Comm. Pure
Appl. Math. 30, 707-753.
(1979) On the number of distinct sites visited by a random walk. Comm.
Pure Appl. Math. 27,721-747.

DURRETT, R.
(1991) Probability: Theory and Examples. Wadsworth Brooks/Cole,
Pacific Grove, California.

DVORETZKY, A. - ERDOS, P.
(1951) Some problems on random walk in space. Proc. Second Berkeley
Symposium 353-367.

DVORETZKY, A. - ERDOS, P. - KAKUTANI, S.


(1950) Double points of Brownian paths in n-space. Acta Sci. Math.
Szeged 12,75-81.

ERDOS, P.
(1942) On the law of the iterated logarithm. Annals of Math. 43,
419-436.

ERDOS, P. - CHEN, R. W.
(1988) Random walks on Z,”. J. Multivariate Analysis 25,111-118.

ERDOS, P. - RENYI, A.
(1970) On a new law of large numbers. J . Analyse Math. 23, 103-111.

ERDOS, P. - REVESZ, P.
(1976) On the length of the longest head-run. Coll. Math. SOC. J .
Bolyai: Topics i n Information Theory (ed. Csiszk, I. - Elias, P.) 16,
219-228
(1984) On the favourite points of a random walk. Mathematical Struc-
tures - Computational Mathematics - Mathematical Modelling 2. Sofia,
152-157.
(1987) Problems and results on random walks. Math. Statistics and
Probability Theory, Vol. B., Proc. 6th Pannonian Symp. (ed. Bauer, P.
- Konecny, F. - Wertz, W.) D. Reidel, Dordrecht. 59-65.
(1988) On the area of the circles covered by a random walk. Journal of
Multivariate Analysis 27,169-180.
(1990) A new law of the iterated logarithm. Acta Math. Hung. 5 5 ,
125-131.

(1991) Three problems on the random walk in Zd. Studia Sci. Math.
Hung. 26, 309-320.
(1997) On the radius of the largest ball left empty by a Wiener process.
Studia Sci. Math. Hung. 33, 117-125.

ERDOS, P. - TAYLOR, S. J.
(1960/A) Some problems concerning the structure of random walk paths.
Acta Math. Acad. Sci. Hung. 11, 137-162.
(1960/B) Some intersection properties of random walk paths. Acta
Math. Acad. Sci. Hung. 11, 231-248.

FELLER, W.
(1943) The general form of the so-called law of the iterated logarithm.
Trans. A m . Math. SOC.54, 373-402.
(1966) A n Introduction to Probability Theory and Its Applications,. Vol.
11. J. Wiley, New York.

FISHER, A.
(1987) Convex - invariant means and a pathwise central limit theorem.
Advances in Mathematics 63,213-246.

FLATTO, L.
(1976) The multiple range of two-dimensional recurrent walk. The An-
nals of Probability 4 , 229-248.

FOLDES, A.
(1975) On the limit distribution of the longest head-run. Matematikai
Lapok 26, 105-116. In Hungarian.
(1989) On the infimum of the local time of a Wiener process. Probab.
Th. Rel. Fields 8 2 , 545-563.

FOLDES, A. - PURI, M. L.
(1993) The time spent by the Wiener process in a narrow tube before
leaving a wide tube. Proc. Amer. Math. SOC.117,529-537.

FODES, A. - REVESZ, P.
(1992) On hardly visited points of the Brownian motion. Probability
Theory and Related Fields 91, 71-80.
(1993) Quadratic variation of the local time of a random walk. Statistics
& Probability Letters 17,1-12.

GNEDENKO, B. V. - KOLMOGOROV, A. N.
(1954) Limit Distributions for Sums of Independent Random Variables.
Addison - Wesley, Reading, Massachusetts.

GOLOSOV, A. 0.
(1983) Limit distributions for random walks in random environments.
Soviet Math. Dokl. 28, 18-22.
(1984) Localization of random walks in one-dimensional random envi-
ronments. Commun. Math. Phys. 92,491-506.

GONCHAROV, V. L.
(1944) From the domain of Combinatorics. Izv. Akad. Nauk S S S R Ser.
Math. 8(1), 3-48.

GOODMAN, V. - KUELBS, J.
(1988) Rates of convergence for increments of Brownian motion. J.
Theor. Probab. 1, 27-63.

GORN, N. L. - LIFSCHITZ, M. A.
(1998) Chung’s law and CsAki function. J . Theor. Probab. 12,
399 - 420.
GRIFFIN, P.
(1990) Accelerating beyond the third dimension: Returning to the origin
in simple random walk. The Mathematical Scientist. 15,24-35.

GRILL, K.
(1987/A) On the rate of convergence in Strassen’s law of the iterated
logarithm. Probab. Th. Rel. Fields 74, 583-589.
(1987/B) On the last zero of a Wiener process. Mathematical Statistics
and Probability Theory, Vol. A . , Proc. 6th Pannonian Sump. (ed. Puri,
M. L. - RBvBsz, P. - Wertz, W.) D. Reidel, Dordrecht. 99-104.
(1991) On the increments of the Wiener processes. Studia Sci. Math.
Hung. 26,329-354.

GUIBAS, L. J. - ODLYZKO, A. M.
(1980) Long repetitive patterns in random sequences. 2. Wahrschein-
lichkeitstheorie verw. Gebiete 53, 241-262.

HAMANA, Y.
(1995) On the central limit theorem for multiple point range of random
walk. J . Fac. Sci. Uniu. Tokyo Sect. I A Math. 39,339-363.
(1997) The fluctuation result for the multiple point range of two-
dimensional recurrent random walks. The Annals of Probability 25,
598-639.
(1998) An almost sure invariance principle for the range of random walks.
Stochastic Process. Appl. 78,131-143.

HANSON, D. L. - RUSSO, R. P.
(1983/A) Some results on increments of the Wiener process with appli-
cations t o lag sums of I.I.D. random variables. The Annals of Probability
11,609-623.
(1983/B) Some more results on increments of the Wiener process. The
Annals of Probability 11, 1009-1015.

HARTMAN, P. - WINTNER, A.
(1941) On the law of iterated logarithm. Amer. J. Math. 63,169-176.

HAUSDORFF, F.
(1913) Grundziige der Mengenlehre. Leipzig.

HIRSCH, W. M.
(1965) A strong law for the maximum cumulative sum of independent
random variables. Comm. Pure Appl. Math. 18, 109-217.

HOUGH, J. B. - PERES, Y.
(2005) An LIL for cover times of discs by planar random walk and Wiener
sausage. To appear.

HU, Y. - SHI, Z.
(1998/A) The local time of simple random walk in random environment.
J . Theoretical Probability 11, 765-793.
(1998/B) The limits of Sinai’s simple random walk in random environ-
ment. The Annals of Probability 26,1477-1521.

IMHOF, I. P.
(1984) Density factorizations for Brownian motion meander and the
three-dimensional Bessel process. J. Appl. Probab. 21, 500-510.

ITO, K.
(1942) Differential equations determining a Markoff process. Kiyosi It6
Selected Papers. Springer-Verlag, New York (1986), 42-75.

ITO, K. - MCKEAN Jr., H. P.


(1965) Diffusion processes and their sample paths. Die Grundlagen der
Mathematischen Wissenschaften Band 125. Springer-Verlag, Berlin.

JAIN, N. C. - PRUITT, W. E.
(1971) The range of transient random walk. J . Analyse Math. 24,
369-373.
(1972/A) The law of iterated logarithm for the range of random walk.
Ann. Math. Statist. 43,1692-1697.

(1972/B) The range of random walk. Proc. Sixth Berkeley Symp. Math.
Statist. Probab. 3,Univ. California Press, Berkeley, 31-50.
(1974) Further limit theorems for the range of random walk. J . Analyse
Math. 27,94-117.

KALIKOW, S. A.
(1981) Generalized random walk in a random environment. The Annals
of Probability 9, 753-768.

KARLIN, S. - OST, F.
(1988) Maximal length of common words among random letter se-
quences. The Annals of Probability 16,535-563.

KESTEN, H.
(1965) An iterated logarithm law for the local time. Duke Math. J. 32,
447-456.
(1980) The critical probability of band percolation on iZ2 equals 1/2.
Comm. Math. Phys. 74,41-59.
(1986) The limit distribution of Sinai's random walk in random environ-
ment. Comm. Math. Phys. 138,299-309.
(1987) Hitting probabilities of random walks on Z d . Stochastic Processes
and their Applications 25, 165-184.
(1988) Recent progress in rigorous percolation theory. Aste'risque 157-
158,217-231.

KESTEN, H. - SPITZER, F.
(1979) A limit theorem related to a new class of self similar processes.
2. Wahrscheinlichkeitstheorie uerw. Gebiete 50,5-25.

KEY, E. S.
(1984) Recurrence and transience criteria for random walk in a random
environment. The Annals of Probability 12,529-560.

KHINCHINE, A.
(1923) Uber dyadische Briiche. Math. Zeitschrift 18, 109-116.

KHOSHNEVISAN, D.
(1994) Exact rates of convergence to Brownian local time. The Annals
of Probability 22,1295-1330.
(2002) Multiparameter Processes. Springer-Verlag, New York, Berlin,
Heidelberg.

KNIGHT, F. B.
(1981) Essentials of Brownian Motion and Diffusion Am. Math. SOC.,
Providence, R.I.

(1986) On the duration of the longest excursion. Seminar on Stochastic


Processes, 1985. Birkhauser, Boston. 117-147.

KOLMOGOROV, A. N.
(1933) Grundbegriffe der Wahrscheinlichkeitsrechnung. Springer, Berlin.

KOMLÓS, J. - MAJOR, P. - TUSNÁDY, G.
(1975) An approximation of partial sums of independent R.V.'s and the
sample DF. I. Z. Wahrscheinlichkeitstheorie verw. Gebiete 32, 111-131.
(1976) An approximation of partial sums of independent R.V.'s and the
sample DF. II. Z. Wahrscheinlichkeitstheorie verw. Gebiete 34, 33-58.

LACEY, M. T. - PHILIPP, W.
(1990) A note on the almost sure central limit theorem. Statistics d
Probability Letters 9, 201-205.

LAMPERTI, J.
(1977) Stochastic Processes. A Survey of the Mathematical Theory.
Springer - Verlag, New York.

LAWLER, G. F.
(1980) A self-avoiding random walk. Duke Mathematical Journal 47,
655-692.
(1991) Intersections of Random Walks. Birkhauser, Boston.
(1993) On the covering time of a disc by simple random walk in
two-dimensionals. Seminar on Stochastic Processes 1992 Birkhauser,
Boston. 33, 189-208.

LE GALL, J.-F.
(1986) Propriétés d'intersection des marches aléatoires I. Convergence
vers le temps local d'intersection. Comm. Math. Phys. 104, 471-507.
(1988) Fluctuation results for the Wiener sausage. The Annals of Prob-
ability 16, 991-1018.

LE GALL, J.-F. - ROSEN, J.


(1991) The range of stable random walks. The Annals of Probability 19,
650-705.
LEVY, P.
(1948) Processu Stochastique et Mouvement Brownien. Gauthier - Vil-
lars, Paris.

MAJOR, P.
(1988) On the set visited once by a random walk. Probab. Th. Rel.
Fields 77,117-128.

MARCUS, M. B. - ROSEN, J.
(1997) Laws of iterated logarithm for intersections of random walks on
Z4. Ann. Inst. H. Poincare‘ Probab. Statist. 33, 37-63.
MCKEAN Jr, H. P.
(1969) Stochastic Integrals. Academic Press, New York.

MOGUL’SKII, A. A.
(1979) On the law of the iterated logarithm in Chung’s form for func-
tional spaces. Th. of Probability and its Applications 24, 405-412.

MORI, T.
(1989) More on the waiting time till each of some given patterns occurs
as a run. Preprint.

MUELLER, C.
(1983) Strassen’s law for local time. 2. Wahrscheinlichkeitstheorie verw.
Gebiete 63, 29-41.

NEMETZ, T - KUSOLITSCH, N.
(1982) On the longest run of coincidences. 2. Wahrscheinlichkeitstheorie
verw. Gebiete 61, 59-73.

NEWMAN, D.
(1984) In a random walk the number of “unique experiences” is two on
the average. S I A M Review 26, 573-574.

OREY, S. - PRUITT, W. E.
(1973) Sample functions of the N-parameter Wiener process. The A n -
nals of Probability 1, 138-163.

ORTEGA, I. - WSCHEBOR, M.
(1984) On the increments of the Wiener process. 2. Wahrscheinlichkeit-
stheorie verw. Gebiete 65, 329-339.

PERKINS, E.
(1981/A) A global instrinsic characterization of Brownian local time.
The Annals of Probability 9 , 800-817.
(1981/B) On the iterated logarithm law for local time. Proc. Amer.
Math. SOC.81, 470-472.

PERKINS, E. - TAYLOR, S. J.
(1987) Uniform measure results for the image of subsets under Brownian
motion. Probab. Th. Rel. Fields 76, 257-289.

PETROV, V. V.
(1965) On the probabilities of large deviations for sums of independent
random variables. Th. of Probability and its Applications 10,287-298.

PETROWSKY, I. G.
(1935) Zur ersten Randwertaufgabe der Warmleitungsgleichung. Comp.
Math. B. 1, 383-419.

PITT, J. H.
(1974) Multiple points of transient random walk. Proc. Amer. Math.
SOC.43, 195-199.

POLYA, G.
(1921) Uber eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend
die Irrfahrt im Strassennetz. Math. Ann. 84,149-160.

QUALLS, G. - WATANABE, H.
(1972) Asymptotic properties of Gaussian processes. Annals Math.
Statistics 43,580-596.

RENYI, A.
(1970/A) Foundations of Probability. Holden-Day, San Francisco.
(1970/B) Probability Theory. AkadCmiai Kiad6, Budapest and North-
Holland, Amsterdam.

REVESZ, P.
(1978) Strong theorems on coin tossing. PTOC. Int. Cong. of Mathe-
maticians, Helsinki.
(1979) A generalization of Strassen's functional law of iterated logarithm.
2. Wahrscheinlichkeitstheorie verw. Gebiete 50,257-264.
(1981) Local time and invariance. Lecture Notes in Math.: Analytical
Methods i n Probab. Th. 861,128-145.
(1982) On the increments of Wiener and related process. The Annals of
Probability 10,613-622.
(1988) In random environment the local time can be very big. SociCtL.
Mathkmatique de France, Aste'risque 157-158,321-339.
(1989) Simple symmetric random walk in Zd. Almost Everywhere Con-
vergence. Proceedings of the Int. Conf. on Almost Everywhere Conver-
gence (ed. G. A. Edgar, L. Sucheston) Academic Press, Boston. 369-392.
(1990/A) Estimates of the largest disc covered by a random walk. The
Annals of Probability 18,1784-1789.
(1990/B) On the volume of spheres covered by a random walk. A tribute
to Paul Erd6s (ed. A. Baker, B. Bollobb, A. Hajnal) Cambridge Univ.
Press. 341-347.

(1991) Waiting for the coverage of the Strassen's set. Studia. Sci. Math.
Hung. 26, 379-391.
(1992) Black holes on the plane drawn by a Wiener process. Probability
Theory and Related Fields 93,21-37.
(1993/A) Clusters of a random walk on the plane. The Annals of Prob-
ability 21, 318-328.
(1993/B) Covering problems. Theory of Probability and its Applications
38,367-379.
(1993/C) A homogeneity property of the Z² random walk. Acta Sci.
Math. (Szeged) 57, 477-484.
(1996) Balls left empty by a critical branching Wiener process. J. of
Applied Math. and Stochastic Analysis 9,531-549.
(2000) On the inverse local time process of a plane random walk. Peri-
odica Math. Hung. 41,227-236.
(2004) The maximum of the local time of a transient random walk.
Studia Sci. Math. Hung. 41, 379-390.

RGVESZ, P. - SHI, Z.
(2000) Strong approximation of spatial random walk in random scenery.
Stochastic Processes and their Applications 88,329-345.

RIESZ, F. - SZ. NAGY, B.


(1953) F'unctional Analysis. Frederick Ungar, New York.

ROSEN, J.
(1997) Laws of the iterated logarithm for triple intersections of three-
dimensional random walks. Electron. J. Probab. 2, 1-32.

SAMAROVA, S. S.
(1981) On the length of the longest head-run for a Markov chain with
two states. Th. of Probability and its Applications 26, 489-509.

SCHATTE, P.
(1988) On strong versions of the central limit theorem. Math. Nachr.
137,249-256.

SHAO, Q. M.
(1995) On a conjecture of R6vCsz. Proc. A m . Math. SOC.123,575-582.

SHI, Z. - TOTH, B.
(2000) Favourite sites of simple random walks. Periodica Math. Hung.
41, 237-249.

SIMONS, G.
(1983) A discrete analogue and elementary derivation of “LCvy’s equiva-
lence” for Brownian motion. Statistics B Probability Letters 1, 203-206.

SINAi, JA. G.
(1982) Limit behaviour of one-dimensional random walks in random en-
vironment. Th. of Probability and its Applications 27,247-258.

SKOROHOD, A. V.
(1961) Studies in the Theory of Random Processes. Addison - Wesley,
Reading, Mass.

SOLOMON, F.
(1975) Random walks in random environment. The Annals of Probability
3, 1-31.

SPITZER, F.
(1958) Some theorems concerning 2-dimensional Brownian motion.
Transactions of the Am. Math. Soc. 87, 187-197.
(1964) Principles of Random Walk. Van Nostrand, Princeton, N.J.

STRASSEN, V.
(1964) An invariance principle for the law of iterated logarithm. 2.
Wahrscheinhhkeitstheorae verw. Gebiete 3, 211-226.
(1966) A converse to the law of the iterated logarithm. 2. Wahrschein-
lichkeitstheorie uerw. Gebiete 4, 265-268.

SZABADOS, T.
(1989) A discrete It6 formula. Coll. Math. SOC. J. Bolyai: Limit The-
orems in Probability and Statistics (ed. I. Berkes, E. Csiiki, P. RBvCsz)
North-Holland 491-502.

SZEKELY, G. - TUSNADY, G.
(1979) Generalized Fibonacci numbers, and the number of “pure heads”.
Matematikai Lapok 27, 147-151. In Hungarian.

TOTH, B.
(1985) A lower bound for the critical probability of the square lattice
site percolation. Z. Wahrscheinlichkeitstheorie verw. Gebiete 69, 19-22.
(1995) The "true" self-avoiding walk with bond repulsion on Z: limit
theorems. The Annals of Probability 23, 1523-1556.
(1996/A) Multiple covering of the range of a random walk on Z (On
a question of P. Erdos and P. RCvCsz). Studia Sci. Math. Hung. 31,
355-359.

(1996/B) Generalized Ray - Knight theory and limit theorems for self-
interacting random walks on Z'. The Annals of Probability 24,
1324-1367.
(1997) Limit theorems for weakly reinforced random walks on Z.Studia
Sci. Math. Hung. 33, 321-337.
(1999) Self-interacting random motions - A survey. Bolyai SOC. Math.
Studies 9. (ed: P. RBvBsz, B. T6th) 349-384.
(2001) No more than three favorite sites for simple random walk. The
Annals of Probability 29, 484-503.

TOTH, B. -WERNER, W.
(1997) Tied favourite edges for simple random walk. Combinatorics,
Probability and Computing 6, 359-396.

TROTTER, H. F.
(1958) A property of Brownian motion paths. Illinois J. of Math. 2,
425-433.
WEIGL, A.
(1989) Zwei Sätze über die Belegungszeit beim Random Walk. Diplomarbeit,
TU Wien.

WICHURA, M.
(1977) Unpublished manuscript.

ZIMMERMANN, G.
(1972) Some sample function properties of the two-parameter Gaussian
process. Ann. Math. Statistics 43, 1235-1246.
Author Index

Adelman, 0. 279 Donsker, M. D. 126, 224


Andjel, E. D. 348, 349 Durett, R. 14
Auer, P. 245, 251, 264, 266, 297 Dvoretzky, A. 209, 211, 221, 242

Bktfai, P. 53, 77 Erd&, P. 17, 18, 34, 39, 53, 59, 62, 65,
Bass, R. F. 157, 222, 225 67, 77, 79, 121, 135-139, 157,
Benjamini, I. 60 158, 160, 180, 194, 209, 211,
Berkes, I. 32 213, 215, 219, 220, 221, 241,
Bickel, P. J. 180 242, 243, 245, 272, 278
Billingsley, P. 16
Bingham, N. H. 38 Feller, W. 19, 20, 34
Bolthausen, E. 92, 351 Fisher, A. 140
Book, S. A. 71 Flatto, L. 220
Borel, E. 28, 29 Foldes, A. 18, 21, 74, 81, 82, 107, 118-
Borodin, A. N. 107, 110, 351 123, 127, 134, 139, 155, 166,
Brosamler, G. A. 140, 141 167, 169, 172-175, 227, 236, 238
Burdzy, K. 279
Gnedenko, B. V. 19
Golosov, A. 0. 345, 346, 347
Chen, R. W. 62
Goncharov, V. L. 21
Chen, X. 243, 244
Goodman, V. 94
Chung, K. L. 72, 112, 118, 121, 135, 136
Gorn, N. L. 94
Coppersmith, N. 354
Griffin, P. 157, 199, 233
CsBki, E. 18, 41, 45, 71, 72, 74, 81, 82,
Grill, K. 18, 45, 68, 70, 71, 79, 92, 176,
88, 93, 94, 107, 112-115, 118-
178, 179
123, 125, 127, 137-139, 152,
Guibas, L. J. 59, 62
159, 166, 167, 169, 172-176,
179, 227, 236, 238, 281, 283, Häggström, O. 60
285, 351 Hamana, Y. 220, 225, 235
Csörgő, M. 66, 73, 110, 119, 120, Hanson, D. L. 72, 181
126, 141, 163, 164, 166, 167, 185, Hartman, P. 53
317, 318 Horváth, L. 110, 317, 318
Hough, J. B. 245
Darling, D. A. 180 Hu, Y. 328, 334
Davis, B. 354 Hunt, G. A. 112, 118
De Acosta, A. 93
Deheuvels, P. 18, 64, 75, 79, 80, 326, Imhof, 1. p. 175
327 ItB, K. 141, 183
Dembo, A. 219, 240, 246, 263, 277
Devroye, L. 80 Jain, N. C. 222, 225
Diaconis, P. 354
Dobrushin, R. L. 129, 130 Kakutani, S. 242


Kalikow, S. A. 349 Pruitt, W. E. 208, 222, 225


Karlin, S. 66 Puri, M. L. 155
Kesten, H. 112, 118, 296-298, 346, 350,
351 Qualls, G. 180, 182
Key, E. S. 347, 348
RCnyi, A. 13, 14, 19, 20, 28, 53, 67, 77,
Khinchine, A. 30
Khoshnevisan, D. 107, 108, 146, 164 101
RCvCsz, P. 17, 18, 39, 59, 63, 65, 66, 71-
Knight, F. B. 52, 111, 139, 204, 205
73, 75, 79, 88, 91, 109, 112-115,
Kolmogorov, A. N. 19, 25, 34, 38
119, 120, 125, 126, 134, 137,
Komlos, J. 18, 53, 54
138, 141, 157-160, 163, 164,
Kuelbs, J. 94
166, 167, 172-175, 185, 227,
Kumagai, T. 222, 225
235, 236, 238, 245, 246, 256,
Kusolitsch, N. 66
263, 264, 266, 269, 272, 278,
281, 283, 285, 317, 318, 326,
Lacey, M. T. 141
327, 337, 351
Lamperti, J. 293
Riesz, F. 85
Lawler, G. F. 195, 242, 244, 246
Rosen, J. 219, 236, 238, 240, 244, 246,
Le Gall, J. F. 225, 226
263, 281, 283
LCvy, P. 33, 104, 111, 140, 141
Rosenblatt, M. 180
Li, W. 243
Russo, R. P. 72, 181
Lifschitz, M. A. 94
Lynch, I. 80 Sarnarova, S. S. 59
Schatte, P. 140
Major, P. 53, 54, 161 Shao, Q. M. 182
Marcus, M. B. 244 Shi, Z. 158, 159, 236, 238, 263, 283, 285,
McKean Jr, H. P. 141, 185 328, 334, 351
Mogul’skii, A. A. 116 Shore, T. R. 71
Mori, T. 62, 63 Simons, G. 113, 115
Mueller, C. 95, 126 Sinai, JA, G. 314, 345, 346
Skorohod, A. V. 53
Nemetz, T. 66 Solomon, F. 311
Newrnan, D. 161 Spitzer, F. 27, 205, 206, 248, 350, 351
Steinebach, J . 80
Odlyzko, A . M. 59, 62
Steif, J. E. 60
Orey, S. 208
Strassen, V. 32, 88, 89
Ortega, I. 68, 72 Szabados, T. 183
Ost, F. 66 SzCkely, G. 18
Sz.-Nagy, B. 85
Pemantle, R. 279
Peres, Y. 60, 219, 240, 245, 246, 263 Taylor, S. J. 139, 194, 209, 211, 213,
Perkins, E. 119, 141, 240 215, 219, 220, 240, 241, 242, 243
Petrov, V. V. 66 T6th, B. 158, 159, 162, 297, 355
Petrowsky, I. G. 34 Trotter, H. F. 105
Philipp, W. 141 TusnBdy, G. 18, 53, 54
Pitt, J. H. 220
Polya, G. 23, 193 Varadhan, S. R. 126, 224

Vincze, I. 113 Wintner, A. 53


Wschebor, M. 68, 72
Watanabe, H. 180, 183
Weigl, A. 141
Werner, W. 159 Zeitouni, 0. 219, 240, 246, 263
Wichura, M. 95, 126 Zimmermann, G. 163
Subject Index

Arcsine law 104 Random walk in random environment


Asymptotically deterministic se- definition 303, 348
quence 34 local time 313, 330, 335, 337
maximum 313, 325, 327, 345
Bernstein inequality 13 recurrence 311
Bore1 - Cantelli lemma 27 Random walk in random scenery 350
Brownian motion 9 Random walk in Z'
definition 9
Central limit theorem 19 excursion 146, 147
Chebyshev inequality 28 favourite sites 157
first recurrence 97
Dirichlet problem 293 increments 57, 77
DLA model 296 increments of the local time 123
law of the iterated logarithm 31
EFKP LIL 34 law of the large numbers 28
local time 98, 117, 146,
Gap method 29
location of the last zero 102, 136,
175
Invariance principle 52, 109, 203
location of the maximum 102, 104,
It6 formula 183
136, 171
It8 integral 183
longest run 21, 57
Large deviation theorem 14, 19 longest zero-free interval 135
Levy classes 33 maximum 14, 20, 31, 35, 41
LIL of Hartman - Wintner 32 maximum of the absolute value 20,
31, 35, 41, 171
LIL of Khinchine 31
Logarithmic density 140 mesure du voisinage 141
Long head-runs 57 number of crossings 100, 113
range 44
Markov inequality 28 rarely visited points 157, 161
Method of high moments 29 recurrence 23
Strassen type theorems 83, 90, 124
Normal numbers 29 Random walk in Z d
almost covered discs 264
Ornstein - Uhlenbeck process 179 completely covered balls 272
completely covered discs 245, 263
Percolation 297 definition 192
excursions 281, 284
Quasi asymptotically deterministic se- first recurrence 211
quence 34 heavy points 227
heavy balls 236
Rademacher functions 10, 11 law of the iterated logarithm 207


local time 211, 218 excursion 139, 141


maximum 206, 209 increments 66
range 221 increments of the local time 119
rate of escape 209 local time 104, 109,
recurrence 193 location of the last zero 179
self-crossing 241 location of the maximum 175
speed of escape 288 longest zero-free interval 121
Strassen type theorems 204 maximum 53
Reflection principle 15 maximum of the absolute value 53
Reinforced random walk 353 mesure du voisinage 141
occupation time 155
Skorohod embedding scheme 53 Strassen type theorems 84, 90, 92,
Stirling formula 19 124
Wiener process in
Tanaka formula 185 definition 203
Theorem of Bore1 29 law of the iterated logarithm 207
Theorem of Chung 39 maximum 206
Theorem of Donsker and Varadhan 125 rate of escape 209
Theorem of Hausdorff 30 self-crossing 241
Theorem of Hirsch 39 Strassen type theorems 204
Wiener sausage 226
Wichura's theorem 95 Wiener sheet 163
Wiener process in R'
definition 51 Zero-one law 25
