RP definitions
In a similar way, a collection of pdfs $f_{X_{c_1},\dots,X_{c_k}}(x_{c_1},\dots,x_{c_k})$ for all choices of $c_1,\dots,c_k$ and for all orders $k$ uniquely defines a continuous-valued process if the pdfs are consistent in the sense that
\[
\int f_{X_{c_1},\dots,X_{c_k},X_{c_{k+1}}}(x_{c_1},\dots,x_{c_k},x_{c_{k+1}})\,dx_{c_{k+1}} = f_{X_{c_1},\dots,X_{c_k}}(x_{c_1},\dots,x_{c_k}).
\]
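To make the consistency condition concrete, here is a minimal numerical sketch (not part of the notes): it marginalizes a bivariate Gaussian pdf by Riemann summation and checks that the result matches the corresponding univariate Gaussian pdf. The Gaussian family, the correlation 0.6, and the grid are illustrative assumptions.

import numpy as np

# Joint pdf of (X_{c_1}, X_{c_2}): zero-mean bivariate Gaussian with
# unit variances and correlation rho (an illustrative choice).
rho = 0.6
x1 = np.linspace(-8.0, 8.0, 1601)
x2 = np.linspace(-8.0, 8.0, 1601)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")
joint = (np.exp(-(X1**2 - 2*rho*X1*X2 + X2**2) / (2*(1 - rho**2)))
         / (2*np.pi*np.sqrt(1 - rho**2)))

# Integrate out x_{c_2}; consistency says this recovers f_{X_{c_1}}.
dx2 = x2[1] - x2[0]
marginal_num = joint.sum(axis=1) * dx2
marginal_true = np.exp(-x1**2 / 2) / np.sqrt(2*np.pi)
print(np.max(np.abs(marginal_num - marginal_true)))  # close to zero

The check works because the marginals obtained from a joint pdf are automatically consistent in this sense; the condition in the text demands the same of any candidate family of pdfs.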
A collection of pmfs $p_{X_{c_1},\dots,X_{c_k}}(x_{c_1},\dots,x_{c_k})$ for all choices of $c_1,\dots,c_k$ and for all orders $k$ uniquely defines a discrete-valued process if the pmfs are consistent in the sense that
\[
\sum_{x_{c_{k+1}}} p_{X_{c_1},\dots,X_{c_k},X_{c_{k+1}}}(x_{c_1},\dots,x_{c_k},x_{c_{k+1}}) = p_{X_{c_1},\dots,X_{c_k}}(x_{c_1},\dots,x_{c_k}).
\]
The standard relations between these functions (cdf, pmf, pdf) behave as described earlier in the course for finite-dimensional random vectors. Example: the discrete-time process described above can be defined by the collection of $k$-th order pmfs
\[
P(X_{c_1}=x_{c_1},\dots,X_{c_k}=x_{c_k}) = \frac{1}{10^k}
\]
for all $k$, all indices $c_1,\dots,c_k$, and all values $x_{c_1},\dots,x_{c_k} \in \{0,1,\dots,9\}$. Note that these pmfs are consistent with each other (in the sense described above).
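As a quick empirical check of this example (a sketch; the sample size, seed, and digit pattern below are arbitrary choices), draw i.i.d. digits uniform on $\{0,1,\dots,9\}$ and verify that a fixed $k$-tuple appears with frequency close to $10^{-k}$:

import numpy as np

rng = np.random.default_rng(0)
n, k = 10**6, 3
digits = rng.integers(0, 10, size=(n, k))  # n draws of (X_{c_1},...,X_{c_k})

pattern = np.array([3, 1, 4])              # any fixed k-tuple of digits
freq = np.mean(np.all(digits == pattern, axis=1))
print(freq, 10.0**(-k))                    # both close to 0.001

Consistency here is just arithmetic: summing the $(k+1)$-th order pmf $10^{-(k+1)}$ over the ten possible values of $x_{c_{k+1}}$ gives back $10^{-k}$.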
Definition 1. A process is said to have independent increments if for any $k$ and for any choice of indices $t_1 < \dots < t_k$ the random variables
\[
X_{t_2}-X_{t_1},\ \dots,\ X_{t_k}-X_{t_{k-1}}
\]
are independent (this definition holds for both discrete-time and continuous-time processes).

Definition 2. A process is said to be a Markov process if for any $k$ and for any choice of indices $t_1 < \dots < t_k$
\[
f_{X_{t_k} \mid X_{t_1}=x_{t_1},\dots,X_{t_{k-1}}=x_{t_{k-1}}}(x_{t_k}) = f_{X_{t_k} \mid X_{t_{k-1}}=x_{t_{k-1}}}(x_{t_k}) \quad \text{for continuous-valued processes} \tag{1}
\]
\[
p_{X_{t_k} \mid X_{t_1}=x_{t_1},\dots,X_{t_{k-1}}=x_{t_{k-1}}}(x_{t_k}) = p_{X_{t_k} \mid X_{t_{k-1}}=x_{t_{k-1}}}(x_{t_k}) \quad \text{for discrete-valued processes} \tag{2}
\]
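To see Definitions 1 and 2 at work together (a minimal sketch; the symmetric $\pm 1$ random walk, the sample size, and the conditioning states are my illustrative choices), the following simulates a process with independent increments and checks (2) empirically at $k = 3$, i.e. that $p(X_3 = 1 \mid X_2 = 0, X_1 = 1) \approx p(X_3 = 1 \mid X_2 = 0)$:

import numpy as np

rng = np.random.default_rng(1)
steps = rng.choice([-1, 1], size=(10**6, 3))  # i.i.d. increments
X = steps.cumsum(axis=1)                      # columns are X_1, X_2, X_3

both = (X[:, 0] == 1) & (X[:, 1] == 0)        # condition on X_1 and X_2
cond_full = np.mean(X[both, 2] == 1)
only2 = X[:, 1] == 0                          # condition on X_2 alone
cond_markov = np.mean(X[only2, 2] == 1)
print(cond_full, cond_markov)                 # both approximately 0.5

Both conditional frequencies come out near 0.5, as the Markov property predicts for a walk whose increments are independent.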
A process with independent increments is necessarily Markov, but not vice versa.

Recall that for a vector RV the mean is a vector and the covariance is a matrix. The analogues for processes are functions. Note, however, the inappropriate (but conventional) definition of autocorrelation in light of our previous definition of correlation.

The mean of a random process $X$ is the function $m_X(t) = E(X_t)$. The autocorrelation of a random process $X$ is the function $R_X(t_1,t_2) = E(X_{t_1} X_{t_2})$. The autocovariance of a random process $X$ is the function $C_X(t_1,t_2) = E\big((X_{t_1}-m_X(t_1))(X_{t_2}-m_X(t_2))\big)$. By the results we have for covariance we also have $C_X(t_1,t_2) = R_X(t_1,t_2) - m_X(t_1)\,m_X(t_2)$. The variance at time $t$ of the process is $C_X(t,t)$.

Example: for a RV $Y$ define a process $X_t = Y\cos(2\pi t)$. The mean is
\[
m_X(t) = E(Y\cos(2\pi t)) = E(Y)\cos(2\pi t),
\]
the autocorrelation is
\[
R_X(t_1,t_2) = E(Y\cos(2\pi t_1)\,Y\cos(2\pi t_2)) = E(Y^2)\cos(2\pi t_1)\cos(2\pi t_2),
\]
and the autocovariance is
\[
C_X(t_1,t_2) = R_X(t_1,t_2) - m_X(t_1)m_X(t_2) = \big(E(Y^2) - (E(Y))^2\big)\cos(2\pi t_1)\cos(2\pi t_2)
\]
(a numerical check of this example appears at the end of the section).

If we have two processes $X$, $Y$ we have similar definitions for their interaction. The two processes are said to be independent if all the finite-dimensional marginals $(X_{t_1},\dots,X_{t_k})$ and $(Y_{t_1},\dots,Y_{t_l})$ are vector RVs independent of each other. The cross-correlation is $R_{X,Y}(t_1,t_2) = E(X_{t_1} Y_{t_2})$. If the cross-correlation is identically zero, the two processes are said to be orthogonal. The cross-covariance of the two processes is $C_{X,Y}(t_1,t_2) = E\big((X_{t_1}-m_X(t_1))(Y_{t_2}-m_Y(t_2))\big)$.
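As referenced above, here is a Monte Carlo sketch of the $X_t = Y\cos(2\pi t)$ example; the uniform distribution of $Y$, the time points, and the sample size are illustrative assumptions, not part of the notes.

import numpy as np

rng = np.random.default_rng(2)
Y = rng.uniform(0.0, 2.0, size=10**6)  # so E(Y) = 1 and E(Y^2) = 4/3
t1, t2 = 0.1, 0.35

m_hat = np.mean(Y * np.cos(2*np.pi*t1))                       # m_X(t1)
R_hat = np.mean(Y*np.cos(2*np.pi*t1) * Y*np.cos(2*np.pi*t2))  # R_X(t1,t2)

print(m_hat, np.cos(2*np.pi*t1))                           # E(Y) cos(2 pi t1)
print(R_hat, (4/3)*np.cos(2*np.pi*t1)*np.cos(2*np.pi*t2))  # E(Y^2) cos cos

The estimates match the closed forms $E(Y)\cos(2\pi t_1)$ and $E(Y^2)\cos(2\pi t_1)\cos(2\pi t_2)$ up to Monte Carlo error, illustrating that the randomness of this process sits entirely in the single RV $Y$.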