
VIOLATION OF CLRM ASSUMPTIONS:

AUTOCORRELATION

SS|FINANCE|RU
Chapter 10
AUTOCORRELATION: NATURE
• We assumed about the CLRM's errors that cov(u_i, u_j) = 0 for i ≠ j. This is
essentially the same as saying there is no pattern in the errors.
• If this assumption is no longer valid, then the disturbances are pairwise
autocorrelated (or serially correlated).
• This means that an error occurring at period t may be carried over to the next
period t + 1. Graphically, if there are patterns in the residuals from a model, we
say that they are autocorrelated.
• Autocorrelation is most likely to occur in time series data. In cross-sectional
data we can change the arrangement of the data without altering the results.
• Obviously, we never have the actual u's, so we use their sample counterparts,
the residuals (the û's).

…AUTOCORRELATION: NATURE

Figures (a) to (d) show a distinct pattern among the û_t's, while (e) shows no
systematic pattern, which is the geometric counterpart of the assumption of no
autocorrelation.

WHAT CAUSES AUTOCORRELATION?
Inertia:
• Time series such as GNP, price indexes, production, employment, and unemployment
exhibit business cycles. Starting at the bottom of a recession, when economic
recovery begins, most of these series start moving upward and continue to do so
until something slows them down (e.g., an increase in interest rates or taxes, or both).
Therefore, in regressions involving time series data, successive observations are
likely to be interdependent.
Specification Bias: Excluded Variables Case
• One factor that can cause autocorrelation is omitted variables. Suppose Y_t is
related to X_2t and X_3t, but we wrongly do not include X_3t in the model.
• The effect of X_3t will then be captured by the disturbances u_t. If X_3t, like many
economic series, exhibits a trend over time, then X_3t depends on X_3,t−1, X_3,t−2,
and so on. Consequently, u_t depends on u_{t−1}, u_{t−2}, and so on.
…WHAT CAUSES AUTOCORRELATION?
Specification Bias: Incorrect Functional Form
Suppose Y_t is related to X_t by a quadratic relationship: Y_t = β_1 + β_2 X_t + β_3 X_t² + u_t,
but we wrongly assume and estimate a straight line: Y_t = β_1 + β_2 X_t + v_t. Then the
error term obtained from the straight line will depend on X_t².

Cobweb Phenomenon
The supply of many agricultural commodities reflects the cobweb phenomenon, where
supply reacts to price with a lag of one time period because supply decisions take time
to implement. Thus the supply function is:
Supply_t = β_1 + β_2 P_{t−1} + u_t
• Here the disturbances are not expected to be random: if farmers overproduce in
year t, they are likely to reduce their production in t + 1, and so on, leading to a
cobweb pattern.
…WHAT CAUSES AUTOCORRELATION?

Data Manipulation
In empirical analysis the raw data are often "massaged" in a process referred to
as data manipulation. For example, in time series regressions involving
quarterly data, such data are often derived from monthly data by simply
adding three monthly observations and dividing the sum by 3.
• This averaging introduces "smoothness" into the data by dampening the
fluctuations in the monthly data.
• Therefore, a plot of the quarterly data looks much smoother than the
monthly data, and this smoothness can itself lend a systematic pattern to
the disturbances, thereby inducing autocorrelation.
• Interpolation and extrapolation of data can also cause a time series to be autocorrelated.
…WHAT CAUSES AUTOCORRELATION?

• Data Transformation
• Consider the model: Y_t = β_1 + β_2 X_t + u_t → level form (a)
where Y_t = consumption expenditure; X_t = income.
• Since model (a) holds at time t, it also holds at time t − 1. Therefore, we can
rewrite (a) as:
Y_{t−1} = β_1 + β_2 X_{t−1} + u_{t−1} → lagged form of (a) (b)
• Y_{t−1}, X_{t−1}, and u_{t−1} are known as the lagged values of Y_t, X_t, and u_t,
respectively, lagged by one period.
• Subtracting (b) from (a): ΔY_t = β_2 ΔX_t + Δu_t → differenced form of (a) (c)
where Δ is known as the first-difference operator,
ΔY_t = Y_t − Y_{t−1}, ΔX_t = X_t − X_{t−1}, and Δu_t = u_t − u_{t−1}.
…WHAT CAUSES AUTOCORRELATION?

• For example, suppose in (a) Y and X represent the logarithms of consumption
expenditure and income. Then in (c), ΔY_t and ΔX_t represent changes in the logs
of consumption expenditure and income. As we know, a change in the log of a
variable is a relative change, or a percentage change if multiplied by 100.
• So, instead of studying relationships between the variables in level form, we
may be interested in their relationships in growth form.
• Now, even if the error term in (a) satisfies the standard OLS assumptions,
particularly the assumption of no autocorrelation, it can be shown that the
error term Δu_t in (c) is autocorrelated.
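The claim that differencing induces autocorrelation can be checked numerically. A minimal sketch (not from the slides), assuming the level-form errors u_t are white noise; in theory, Δu_t then has first-order autocorrelation exactly −0.5:

```python
import numpy as np

# If u_t is white noise, the differenced error Δu_t = u_t − u_{t−1}
# is autocorrelated with corr(Δu_t, Δu_{t−1}) = −0.5.
rng = np.random.default_rng(42)
u = rng.normal(size=50_000)          # well-behaved (uncorrelated) errors
du = np.diff(u)                      # Δu_t = u_t − u_{t−1}

def rho1(x):
    """First-order autocorrelation: Σ x_t x_{t−1} / Σ x_t²."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)

print(rho1(u))    # close to 0: the original errors show no autocorrelation
print(rho1(du))   # close to -0.5: differencing has induced autocorrelation
```

The sample size is large only so that the estimated coefficients sit close to their theoretical values.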
EMPIRICAL AUTOCORRELATION

• Autocorrelation is measured by, for example, the relationship between
successive values, values with a time lag of two periods, three periods, etc. One
speaks in this connection of autocorrelation of the 1st order, 2nd order, 3rd
order, etc.
• The empirical first-order autocorrelation coefficient reads:
ρ̂_1 = Σ_{t=2}^{n} û_t û_{t−1} / Σ_{t=1}^{n} û_t²
• In general, the autocorrelation coefficient of j-th order is given by:
ρ̂_j = Σ_{t=j+1}^{n} û_t û_{t−j} / Σ_{t=1}^{n} û_t²
EMPIRICAL AUTOCORRELATION…
• In a regression of the share prices of a stock A (Y) on the stock index (X), the
following residuals arose. How large is the first-order autocorrelation coefficient?

 t  | û_t  | û_{t−1} | û_t × û_{t−1} | û_t²
 1  |  0.6 |   —     |    —          | 0.36
 2  |  0.5 |  0.6    |   0.30        | 0.25
 3  | −0.4 |  0.5    |  −0.20        | 0.16
 4  | −0.7 | −0.4    |   0.28        | 0.49
 5  | −0.3 | −0.7    |   0.21        | 0.09
 6  |  0.3 | −0.3    |  −0.09        | 0.09
 7  |  0.4 |  0.3    |   0.12        | 0.16
 8  | −0.4 |  0.4    |  −0.16        | 0.16
 Σ  |  0   |   —     |   0.46        | 1.76

ρ̂_1 = Σ û_t û_{t−1} / Σ û_t² = 0.46 / 1.76 ≈ 0.261
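The table's computation can be reproduced in a few lines. A sketch in Python, using the eight residuals given above:

```python
import numpy as np

# Residuals from the share-price regression (t = 1..8).
u = np.array([0.6, 0.5, -0.4, -0.7, -0.3, 0.3, 0.4, -0.4])

num = np.sum(u[1:] * u[:-1])   # Σ û_t × û_{t−1} = 0.46
den = np.sum(u ** 2)           # Σ û_t²          = 1.76
rho1 = num / den
print(round(float(rho1), 3))   # 0.261
```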
CONSEQUENCES OF AUTOCORRELATION
1. The least squares estimators are still linear and unbiased. But they are not
efficient; that is, they do not have minimum variance. In short, the usual ordinary
least squares (OLS) estimators are not best linear unbiased estimators
(BLUE).
2. The estimated variances of the OLS estimators are biased. Sometimes the usual
formulas to compute the variances and standard errors of the OLS estimators seriously
underestimate the true variances and standard errors, thereby inflating or deflating the
t values. Therefore, the usual t and F tests are not generally reliable.
3. The usual formula to compute the error variance, namely σ̂² = Σ û_t² / (n − k),
where k is the number of estimated parameters, is a biased estimator of the true σ².
Therefore, the conventionally computed R² may be an unreliable measure of the true R².
4. The conventionally computed variances and standard errors of forecasts may also
be inefficient.
DETECTING AUTOCORRELATION
• As we can see, these consequences are similar to those of heteroscedasticity,
and just as serious in practice. Therefore, as with heteroscedasticity, we must find out
whether we have an autocorrelation problem in any given application.
• We do not know the true u_t's; we only have the estimated residuals, and if they are
correlated, we do not know the true mechanism that generated them.
• From basic macroeconomics, one would expect a positive relationship between real wages and
(labor) productivity: ceteris paribus, the higher the level of labor productivity, the higher the
real wages. Regressing real wages on productivity, we obtain the following results:

Real wages = 33.63 + 0.6614 Productivity
se = (1.4001) (0.0156)
t = (24.0243) (42.2928)
r² = 0.9749; d = 0.1463

As expected, there is a positive relationship between real wages and productivity. The
estimated t ratios seem quite high, and the r² value is quite high. But before we accept
these results at face value, we must check the possibility of autocorrelation.
(Note: d refers to the Durbin-Watson statistic, discussed later.)
DETECTING AUTOCORRELATION
• To test for autocorrelation, we consider four methods: (1) the graphical
method, which is comparatively simple, (2) the popular Durbin-Watson d statistic, (3)
the runs test, and (4) the Breusch-Godfrey test.
The Graphical Method
A simple visual examination of the OLS residuals, the e_t's, can give valuable insight
into the likely presence of autocorrelation among the error terms, the u_t's. There are
various ways of examining the residuals: plot the residuals against time (a
time-sequence plot), or plot e_t against e_{t−1}.
THE DURBIN-WATSON 𝒅 TEST
The Durbin-Watson (DW) test is a test for first-order autocorrelation; i.e., it
assumes that the relationship is between an error and the previous one, as
follows:
u_t = ρ u_{t−1} + v_t (1)
where v_t ~ N(0, σ_v²).
• The DW test statistic actually tests:
H_0: ρ = 0 against H_1: ρ ≠ 0
• The test statistic is calculated as:
DW = Σ_{t=2}^{n} (û_t − û_{t−1})² / Σ_{t=1}^{n} û_t²
• A great advantage of the d (DW) statistic is that it is based on the estimated
residuals, which are routinely computed in regression analysis.
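The statistic is easy to compute directly from the residuals. A sketch, applied to the eight residuals of the earlier share-price example:

```python
import numpy as np

def durbin_watson(u):
    """DW = Σ_{t=2..n} (û_t − û_{t−1})² / Σ_{t=1..n} û_t²."""
    u = np.asarray(u, dtype=float)
    return np.sum(np.diff(u) ** 2) / np.sum(u ** 2)

# Residuals from the share-price regression above.
u = [0.6, 0.5, -0.4, -0.7, -0.3, 0.3, 0.4, -0.4]
d = durbin_watson(u)
print(round(float(d), 3))   # 1.182, below 2, consistent with ρ̂_1 = 0.261 > 0
```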
…THE DURBIN-WATSON 𝒅 TEST
• We can also write:
d ≈ 2(1 − ρ̂) (2)
where ρ̂ is the estimated first-order autocorrelation coefficient. Since ρ̂ is a
correlation, −1 ≤ ρ̂ ≤ +1.
• Rearranging (2) gives 0 ≤ d ≤ 4. For example:
ρ̂ = +1 → d = 2 × 0 = 0
ρ̂ = −1 → d = 2 × 2 = 4
ρ̂ = +0.60 → d = 0.8 (positive autocorrelation)
ρ̂ = −0.60 → d = 3.2 (negative autocorrelation)
• If ρ̂ = 0, d = 2. So, roughly speaking, do not reject H_0 if d is near 2; i.e.,
there is no evidence of autocorrelation.
• Unfortunately, the DW test has two critical values, an upper critical value d_U
and a lower critical value d_L, and there is also an intermediate region where the
test is inconclusive: we can neither reject nor fail to reject H_0.
…THE DURBIN-WATSON 𝒅 TEST
[Figure: the DW decision regions — reject H_0 (positive autocorrelation) for d < d_L; inconclusive for d_L ≤ d ≤ d_U; do not reject for d_U < d < 4 − d_U; inconclusive for 4 − d_U ≤ d ≤ 4 − d_L; reject H_0 (negative autocorrelation) for d > 4 − d_L.]
…THE DURBIN-WATSON 𝒅 TEST
• Although it is now routinely used, it is important to note the assumptions
underlying the d statistic.
1. The regression model should include an intercept term. If it is not present, as in the
case of the regression through the origin, it is essential to rerun the regression including
the intercept term to obtain the RSS.
2. The explanatory variables, the 𝑋’s, are nonstochastic, or fixed in repeated sampling.
3. The disturbances u_t are generated by the first-order autoregressive scheme:
u_t = ρ u_{t−1} + ε_t. Therefore, the test cannot be used to detect higher-order
autoregressive schemes.
4. The error term u_t is assumed to be normally distributed.
5. There are no missing observations in the data.
6. The regression model does not include the lagged dependent variable as an
explanatory variable.
THE RUNS TEST (APPENDIX 10A)
• To explain this test, simply note the sign (+ or −) of the residuals obtained from the estimated
regression.
• Suppose in a sample of 20 observations we obtained the following sequence of residuals:
(++)(− − − − − − − − − − − − −)(+ + + + +) (3)
• We now define a run as an uninterrupted sequence of one symbol or attribute, such as + or −.
We further define the length of a run as the number of elements in it.
• In the sequence shown in (3) there are 3 runs: a run of 2 pluses (i.e., of length 2), a run of
13 minuses (i.e., of length 13), and a run of 5 pluses (i.e., of length 5).
• By examining how runs behave in a strictly random sequence of observations, we can derive a
test of the randomness of runs.
• The question we ask is: are the 3 runs observed in our example of 20 observations too many or
too few compared with the number of runs expected in a strictly random sequence of 20
observations?
• If there are too many runs, the residuals change sign frequently, suggesting negative
autocorrelation. Similarly, if there are too few runs, this suggests positive autocorrelation.
…THE RUNS TEST (APPENDIX 10A)

Now let
N = total number of observations (= N_1 + N_2)
N_1 = number of + symbols (i.e., + residuals)
N_2 = number of − symbols (i.e., − residuals)
k = number of runs
• The runs test has H_0: successive observations are independent, against H_1:
successive observations are not independent.
• The number of runs k asymptotically (i.e., in large samples) follows the normal
distribution, so we can use the Z test:
Z = (k − E(k)) / σ_k
where mean: E(k) = 2N_1N_2/N + 1, and variance: σ_k² = 2N_1N_2(2N_1N_2 − N) / (N²(N − 1))
• At the 5% significance level, a |Z| value of 1.96 or higher indicates that we reject
the H_0 of randomness.
… THE RUNS TEST (APPENDIX 10A)

• In our example, N = 20, N_1 = 7, N_2 = 13, k = 3
• E(k) = 2N_1N_2/N + 1 = (2 × 7 × 13)/20 + 1 = 10.1
• σ_k² = 2N_1N_2(2N_1N_2 − N) / (N²(N − 1)) = (182 × 162)/(400 × 19) ≈ 3.88, so σ_k ≈ 1.97
• Z = (k − E(k))/σ_k = (3 − 10.1)/1.97 ≈ −3.60
• Since |Z| = 3.60 > 1.96, we can reject the H_0 of randomness or independence of the
given sequence of residuals.
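The runs-test arithmetic above can be verified in code. A sketch for the sequence in (3), counting the runs with `itertools.groupby`:

```python
import math
from itertools import groupby

# Runs test for sequence (3): 2 pluses, 13 minuses, 5 pluses.
signs = "++" + "-" * 13 + "+++++"

N1 = signs.count("+")                  # 7
N2 = signs.count("-")                  # 13
N = N1 + N2                            # 20
k = sum(1 for _ in groupby(signs))     # 3 runs

Ek = 2 * N1 * N2 / N + 1                                          # expected runs
var_k = (2 * N1 * N2 * (2 * N1 * N2 - N)) / (N ** 2 * (N - 1))    # variance
Z = (k - Ek) / math.sqrt(var_k)

print(k, Ek, round(var_k, 2), round(abs(Z), 2))   # 3 10.1 3.88 3.6
```

Since |Z| exceeds 1.96, the sequence is judged non-random at the 5% level.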
…THE BREUSCH-GODFREY TEST (APPENDIX 10B)
• A test of autocorrelation that is more general than those discussed so far was
developed by the statisticians Breusch and Godfrey.
• This test allows for (1) dynamic regressors, such as lagged values of the dependent variable,
(2) higher-order autoregressive schemes, such as AR(1), AR(2), etc., and (3) simple or higher-order
moving averages of the purely random error terms, such as v_t, v_{t−1}, v_{t−2}, etc.
• To illustrate how the BG test works: (1) run the dividend model (Eq. (4), next slide) and obtain
the residuals, e_t. (2) Run the following auxiliary regression:
e_t = A_1 + A_2 lnCP_t + A_3 Time + C_1 e_{t−1} + C_2 e_{t−2} + ⋯ + C_k e_{t−k} + v_t (5)
• That is, regress the residual at time t on the original regressors, including the intercept, and
the lagged values of the residuals up to time t − k, the value of k being determined by trial and
error or by the Akaike or Schwarz information criteria. Obtain the R² of this regression.
• Calculate nR², the product of the sample size n and the R² obtained from (5). Under the null
hypothesis that all the coefficients of the lagged residual terms are simultaneously equal to
zero, it can be shown that in large samples nR² ~ χ²_k, where k is the number of lagged
residual terms.
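The three BG steps can be sketched end to end. The dividend data are not reproduced here, so this is a minimal illustration on synthetic data: the variable names, the AR(1) coefficient 0.8, and the single lag (k = 1) are all assumptions for the demo, not the textbook's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
v = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):              # AR(1) errors: u_t = 0.8 u_{t−1} + v_t
    u[t] = 0.8 * u[t - 1] + v[t]
y = 1.0 + 2.0 * x + u              # true model (illustrative parameters)

# Step 1: run the original regression and keep the residuals.
X = np.column_stack([np.ones(n), x])
b = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ b

# Step 2: auxiliary regression of e_t on the regressors and e_{t−1} (k = 1).
Z = np.column_stack([np.ones(n - 1), x[1:], e[:-1]])
g = np.linalg.lstsq(Z, e[1:], rcond=None)[0]
resid = e[1:] - Z @ g
tss = (e[1:] - e[1:].mean()) @ (e[1:] - e[1:].mean())
r2 = 1.0 - (resid @ resid) / tss

# Step 3: nR² ~ χ²(1) under H0; the 5% critical value for 1 d.f. is 3.84.
nR2 = (n - 1) * r2
print(nR2 > 3.84)   # True: reject H0 of no autocorrelation
```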
…THE BREUSCH-GODFREY TEST (APPENDIX 10B)
• For a total of 244 quarterly observations on dividends paid (DP) and corporate
profits (CP), consider the following regression:
lnDP_t = B_1 + B_2 lnCP_t + B_3 Time + u_t (4)
• The time (trend) variable is included in the model to allow for the upward trend
in the two time series. B_2 gives the elasticity of dividends with respect to
profits, and B_3, multiplied by 100, the percentage growth in dividends over time.
• The elasticity of dividends with respect to corporate profits is about 0.42, and
dividends have been increasing at a quarterly rate of about 1.26 percent.
• The results look good except for the DW statistic.
… THE BREUSCH-GODFREY TEST (APPENDIX 10B)

As you can see, nR² = 222.54. The probability of obtaining a chi-square value of
222.54 or greater for 3 d.f. is practically zero. Therefore, we can reject the
hypothesis that C_1 = C_2 = C_3 = 0.
The difference between the BG and DW tests is that the BG test considers
higher-order autocorrelations, whereas the DW test considers only first-order
autocorrelation.
CORRECTING FOR AUTOCORRELATION
• The remedy for autocorrelation depends on what knowledge we have, or can assume,
about the nature of the interdependence in the error terms u_t. Consider the
two-variable regression model:
Y_t = β_1 + β_2 X_t + u_t (6)
• Now assume that the error term follows an AR(1) scheme:
u_t = ρ u_{t−1} + e_t, where −1 < ρ < 1 (7)
• We consider two cases:
(1) ρ is known, and
(2) ρ is not known and has to be estimated.
CORRECTING FOR AUTOCORRELATION: WHEN 𝝆 IS KNOWN…
• If the coefficient of first-order autocorrelation is known, the problem of
autocorrelation can be easily solved. If (6) holds at time t, it also holds at
time t − 1. Hence,
Y_{t−1} = β_1 + β_2 X_{t−1} + u_{t−1} (8)
Multiplying (8) by ρ on both sides, we obtain
ρY_{t−1} = ρβ_1 + ρβ_2 X_{t−1} + ρu_{t−1} (9)
Subtracting (9) from (6) gives
(Y_t − ρY_{t−1}) = β_1(1 − ρ) + β_2(X_t − ρX_{t−1}) + e_t, where e_t = u_t − ρu_{t−1} (10)
We can express (10) as:
Y*_t = β*_1 + β*_2 X*_t + e_t (11)
where β*_1 = β_1(1 − ρ), Y*_t = Y_t − ρY_{t−1}, X*_t = X_t − ρX_{t−1}, and β*_2 = β_2.
CORRECTING FOR AUTOCORRELATION: WHEN 𝝆 IS KNOWN…

• Since the error term in (11) satisfies the usual OLS assumptions, we can apply
OLS to the transformed variables Y* and X* and obtain estimators with all the
optimum properties, namely BLUE.
• In effect, running (11) is GLS (generalized least squares), which is nothing
but OLS applied to a transformed model that satisfies the classical assumptions.
• Regression (10) is known as the generalized, or quasi-, difference equation.
It involves regressing Y on X not in the original form, but in the difference
form obtained by subtracting a proportion (= ρ) of the value of a variable in
the previous time period from its value in the current period.
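The quasi-difference transform in (10)-(11) can be sketched on synthetic data with a known ρ. The true parameters (β_1 = 1, β_2 = 2, ρ = 0.9) are assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(7)
n, rho, b1, b2 = 500, 0.9, 1.0, 2.0
x = rng.normal(size=n)
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + e[t]      # AR(1) errors with known ρ
y = b1 + b2 * x + u

# Transformed variables: Y*_t = Y_t − ρY_{t−1}, X*_t = X_t − ρX_{t−1}
y_star = y[1:] - rho * y[:-1]
x_star = x[1:] - rho * x[:-1]

# OLS on the transformed model; its intercept estimates β_1(1 − ρ).
Xs = np.column_stack([np.ones(n - 1), x_star])
b_star = np.linalg.lstsq(Xs, y_star, rcond=None)[0]
beta1_hat = b_star[0] / (1 - rho)     # undo β*_1 = β_1(1 − ρ)
beta2_hat = b_star[1]
print(beta2_hat)                      # close to the true β_2 = 2.0
```

Because the transformed error e_t is white noise, the OLS standard errors from this regression are valid again.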
CORRECTING FOR AUTOCORRELATION: WHEN 𝝆 IS NOT KNOWN…
• Although it is conceptually straightforward to apply the generalized difference method
given in (10), it is difficult to implement because ρ is rarely known in practice.
• The First-Difference Method (ρ = ±1): Since ρ lies between −1 and +1, one could start
from the two extreme positions. At one extreme, one could assume ρ = 0, that is, no
(first-order) serial correlation; at the other extreme, one could let ρ = ±1, that is,
perfect positive or negative correlation.
• As a matter of fact, when a regression is run, one generally assumes that there is no
autocorrelation and then lets the Durbin-Watson or another test show whether this assumption
is justified. If, however, ρ = +1, the generalized difference equation (10) reduces to the
first-difference equation:
Y_t − Y_{t−1} = β_2(X_t − X_{t−1}) + (u_t − u_{t−1}) or ΔY_t = β_2 ΔX_t + ε_t (12)
where Δ is the first-difference operator. Since the error term in (12) is free from
(first-order) serial correlation (why?), to run regression (12) all one has to do is form
the first differences of both the regressand and the regressor(s) and run the regression
on these first differences.
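The effect of first differencing is easy to demonstrate numerically. A sketch (assumed ρ = 0.9): for an AR(1) error with coefficient ρ, the differenced error Δu_t has first-order autocorrelation −(1 − ρ)/2, which is tiny when ρ is near +1.

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 20_000, 0.9
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + e[t]      # strongly autocorrelated errors

def rho1(x):
    """First-order autocorrelation: Σ x_t x_{t−1} / Σ x_t²."""
    return np.sum(x[1:] * x[:-1]) / np.sum(x ** 2)

print(round(float(rho1(u)), 2))           # about 0.9: heavy autocorrelation in levels
print(round(float(rho1(np.diff(u))), 2))  # near -(1 - 0.9)/2 = -0.05: almost gone
```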
𝝆 ESTIMATED FROM DURBIN-WATSON 𝒅 STATISTIC
• Recall the approximate relationship between the d statistic and ρ̂:
d ≈ 2(1 − ρ̂), from which we obtain:
ρ̂ ≈ 1 − d/2 (13)
Since the d statistic is routinely computed by most regression packages, we can
easily obtain an approximate estimate of ρ from Equation (13).
• Once ρ is estimated from d as shown in (13), we can use it to run the generalized
difference equation (10). For the wages-productivity example, whose results were
Real wages = 33.63 + 0.6614 Productivity
se = (1.4001) (0.0156)
t = (24.0243) (42.2928)
r² = 0.9749; d = 0.1463
we have
ρ̂ ≈ 1 − 0.1463/2 = 0.9268
We can use this value to transform the data as in (10). This method of transformation
is easy to use and generally gives good estimates if the sample size is reasonably large.
𝝆 ESTIMATED FROM OLS RESIDUALS
Recall the wages-productivity results:
Real wages = 33.63 + 0.6614 Productivity
se = (1.4001) (0.0156)
t = (24.0243) (42.2928)
r² = 0.9749; d = 0.1463
• Recall the first-order autoregressive scheme, AR(1):
u_t = ρ u_{t−1} + v_t (14)
• Since the u_t's are not directly observable, we can use their sample
counterparts, the estimated residuals û_t, and run the following regression
(through the origin):
û_t = ρ̂ û_{t−1} + v_t (15)
• Statistical theory shows that ρ̂ is a biased estimator of the true ρ in small
samples, but the bias tends to disappear as the sample size increases. Hence,
if the sample size is reasonably large, we can use the ρ̂ obtained from (15).
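Regression (15) through the origin has a closed-form slope, ρ̂ = Σ û_t û_{t−1} / Σ û_{t−1}². A sketch on synthetic AR(1) residuals (the true ρ = 0.9 is an assumption for the demo; the textbook's wage-productivity residuals give ρ̂ ≈ 0.89):

```python
import numpy as np

rng = np.random.default_rng(5)
n, rho = 5_000, 0.9
e = rng.normal(size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + e[t]      # AR(1) "residuals"

# OLS through the origin: ρ̂ = Σ û_t û_{t−1} / Σ û_{t−1}²
rho_hat = np.sum(u[1:] * u[:-1]) / np.sum(u[:-1] ** 2)
print(rho_hat)   # close to the true ρ = 0.9
```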
… 𝝆 ESTIMATED FROM OLS RESIDUALS
We can use the data in Table 10-2 of the textbook to run regression (15); the
results are:
ê_t = 0.8915 ê_{t−1}
se = (0.0552), r² = 0.85
Thus, ρ̂ ≈ 0.89.
We can then use this value of ρ̂ in (10) for the wages-productivity example to
transform the data and run the regression, which will be free of autocorrelation.
(Table 10-2 of the textbook shows the estimated residuals from the
wage-productivity regression and their various transformations.)
BEST OF LUCK!