Tests for Structural Breaks in Time Series Analysis: A Review of Recent Developments
Dufour (1982) extended the Chow test to multiple regimes in which some of the subsamples may contain fewer than k observations. A Chow-type test for simultaneous equations was developed by Lo and Newey (1985). Andrews and Fair (1988) studied general nonlinear models with the analysis-of-variance test. The Quandt test was applied to simple linear regression models (with alternatives of an intercept change only, and of intercept and slope changes) by Kim and Siegmund (1989), and extended to linear regressions with lagged dependent variables by Ploberger, Kramer, and Alt (1988, 1989). Linear regression with serially correlated errors was reviewed by Kao and Ross (1992). Cantrell et al. (1991) and Dufour, Ghysels, and Hall (1994) applied the predictive test to general nonlinear models, and Hansen (1995) provided detailed p-values. Overall, the literature can be organised under the following headings:
• tests for known breakpoints;
• tests for an unknown breakpoint;
• tests for unknown multiple breakpoints.
The present paper proceeds in the following manner. The second section describes the traditional (Chow test) approach with known breakpoints. The third section elaborates the analysis of unknown structural breaks, and the fourth section deals with multiple unknown structural breaks. Finally, the discussion follows.

Test of Known Structural Breaks (Chow Test)
In the linear regression model underlying the Chow test, Y1 and Y2 and ε1 and ε2 are column vectors with n1 and n2 elements respectively, X is a non-singular matrix, and β is the column vector of the p regression coefficients. The Chow (1960) test for a structural break was further developed by Fisher (1970). The model is assumed to be stationary (the parameters are constant over time), with a single break about which a statement is made. The test compares the linear regression with k regressors estimated on the full sample with the regressions estimated on the two subsamples of n1 and n2 observations by means of an ordinary F test. The Chow test has a long history and can be derived in several ways; the most widely used approaches are those of Rao (1952), Kullback and Rosenblatt (1957), Rao (1965) and Dufour (1982). First, the Chow test can be cast as an analysis-of-variance test, which requires n1 > k and n2 > k and is used to test parameter constancy. Second, it can be cast as a predictive test, which applies when n1 > k and n2 < k; the predictive test does not examine the stability of the coefficients but the unbiasedness of the predictions for the second subsample. Third, it can be derived from the fundamental theorem of least squares, which extends the test to multiple regimes, some (though not all) of which may contain fewer than k observations. Maddala (1998), however, stated that the motivation behind the Chow test was to extend the analysis-of-variance test and the predictive test.
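To make the mechanics of the known-break case concrete, the following is a minimal Python sketch (ours, not code from any of the cited authors) of the Chow F statistic: the pooled regression is compared with separate regressions on the two subsamples. The simulated data, the break index and all names are illustrative assumptions.

```python
import numpy as np

def chow_test(y, X, break_idx):
    """Chow (1960) F statistic for a single known break after observation break_idx.

    y: (T,) response; X: (T, k) regressors including a constant column.
    Returns the F statistic and its degrees of freedom (k, T - 2k).
    """
    T, k = X.shape

    def ssr(y_s, X_s):
        # OLS sum of squared residuals on a (sub)sample.
        beta, *_ = np.linalg.lstsq(X_s, y_s, rcond=None)
        resid = y_s - X_s @ beta
        return resid @ resid

    ssr_pooled = ssr(y, X)                                                    # restricted: one regime
    ssr_unres = ssr(y[:break_idx], X[:break_idx]) + ssr(y[break_idx:], X[break_idx:])
    f_stat = ((ssr_pooled - ssr_unres) / k) / (ssr_unres / (T - 2 * k))
    return f_stat, (k, T - 2 * k)

# Illustrative use with simulated data and an assumed break at t = 60.
rng = np.random.default_rng(0)
T = 120
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])
y = np.where(np.arange(T) < 60, 1.0 + 0.5 * x, 2.0 + 1.5 * x) + rng.normal(scale=0.5, size=T)
print(chow_test(y, X, break_idx=60))
```

Both subsamples must contain more than k observations for this analysis-of-variance form of the test; the predictive form applies when the second subsample is shorter than k.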
… of the fundamental breakthroughs in linear regression, including the Chow test. Toyoda (1974) empirically examined the accuracy of the Chow test under heteroskedasticity when one of the two sample sizes is very large; the level of significance of the test is affected when both sample sizes remain small, and under heteroskedasticity the actual significance level always becomes larger. Subsequently, Schmidt and Sickles (1977) investigated the accuracy of the Chow test using Toyoda's framework and concluded that Toyoda's finding was somewhat inaccurate when the two sample sizes and the variances differ. Lo and Newey (1985) and Park (1991) applied the analysis-of-variance test to simultaneous equations. Andrews and Fair (1988) adapted the analysis-of-variance test to general nonlinear econometric models, introducing Wald, Lagrange Multiplier-like and Likelihood Ratio tests; their results hold under weak regularity conditions allowing heteroskedasticity. Dufour, Ghysels, and Hall (1994) approached structural stability through predictive analysis and extended it to general nonlinear dynamic simultaneous equation models; their study assumes a large subsample before the structural break, while the structural change in the second part is left unspecified. More recently, Hansen (2001) explored empirically what happens when parameters change, dating breaks in United States labour productivity. Using data from February 1947 to April 2001 and a simple first-order autoregressive dynamic model, he focused on three aspects: the unknown timing of the structural break, the estimation of the timing of structural breaks, and the distinction between a random walk and a broken time trend. He found substantial evidence of a structural break between 1992 and 1996 and weaker evidence of breaks in the 1960s and early 1980s. Hansen noted that the Chow test leaves two choices: an arbitrary break date or an endogenous break date. Both have limitations. With an arbitrary date the Chow test can be uninformative and the exact break date may not be identified; an endogenous break date is likely to be correlated with the data, so the Chow test can be misleading and a break date may falsely appear where the true features are unknown. Indeed, different choices of nearby break dates gave different answers: taking 1973 as the break date indicated no structural break, whereas 1975 indicated a structural break. Because of these conflicting answers he treated the structural break as unknown and pointed to the Quandt test, and in recent years various authors have addressed the problem and given practical solutions to the Quandt test.

Test of Unknown Structural Breaks
From the late 1970s, the literature on structural breaks was directed towards the detection of parameter instability, or parameter changes occurring at an unknown time. It emphasises parameter instability in dynamic models with trending regressors, co-integrated variables, heteroskedastic disturbances, and possibly unit roots (Bai 1993). Various studies have focused on theoretical and empirical specification, starting with Quandt (1960), Farley and Hinich (1970), Brown et al. (1975), Ploberger (1983), Ploberger and Kramer (1990; 1992), Perron (1989; 1991), Andrews (1990), Zivot and Andrews (1992), Hansen (1992), Chu and White (1992), Bai, Lumsdaine and Stock (1991), Banerjee, Lumsdaine and Stock (1992) and Christiano (1992). Tests for unknown structural breaks or change points can be divided into tests for a single break and tests for multiple breaks. When the break date is known, the Chow test is more powerful; when the break dates are unknown, no prior knowledge of the timing, type, or size of the shift is required. The major contributions on the unknown structural break are discussed below in detail. Synoptic views of the single unknown structural break are given by Vilares (1986), Stock and Watson (2010) and Maddala (1998). Vilares (1986) classified three important tests for a single unknown structural break. A number of studies employ the general structure
$$y_t = \begin{cases} x_{1t}\beta_1 + \varepsilon_{1t}, & t = 1, \ldots, t_0,\\ x_{2t}\beta_2 + \varepsilon_{2t}, & t = t_0 + 1, \ldots, T, \end{cases} \qquad \ldots(2)$$
where t = 1, 2, …, T.
The explanatory variables of both subsets are the same, that is x_{1t} = x_{2t} = x_t, so that k_1 = k_2 = k.

The first and foremost modified version of the Chow test is the Quandt Likelihood Ratio (QLR) statistic. Quandt (1960) tested the hypothesis that a linear regression system obeys two separate regimes: the entire sample is split into two subsets, the observations up to an unknown time m come from one regime, and the following observations come from the other. Quandt thus initiated change-point analysis in a time-varying regression model. The mechanism is based on the Switching Regression Model (SRM) and on Quantity Rationing Models (QRM). The test procedure assesses the significance of the maximum value of the likelihood ratio statistic while employing recursive switching models, and can therefore be regarded as a "max Chow" test. The general formulation of the switching regression model uses a deterministic assignment in which the assignment variable is a row vector of p known constants (which may belong to x_{1t} and x_{2t}) multiplied by a column vector π of p unknown constants. The explanatory variables of both subsets are the same, that is x_{1t} = x_{2t} = x_t. The Quandt approach uses two functions. The first is the log-likelihood
$$L(\alpha, t_0) = -\frac{T}{2}\ln 2\pi - t_0\ln\sigma_1 - (T - t_0)\ln\sigma_2 - \frac{1}{2\sigma_1^2}\sum_{t=1}^{t_0}\bigl(y_t - x_t\beta_1\bigr)^2 - \frac{1}{2\sigma_2^2}\sum_{t=t_0+1}^{T}\bigl(y_t - x_t\beta_2\bigr)^2, \qquad \ldots(6)$$
where α is the vector of parameters (β_1', β_2', σ_1, σ_2) of dimension 2k + 2; this function allows the estimate α̂ of α to be calculated as a function of t_0, α̂ = α̂(t_0). The second function is obtained by replacing α with α̂(t_0), so that the resulting concentrated function depends on t_0 alone; here t = 1, 2, …, T indexes the total sample of the regression model and σ² is the estimated variance. However, since t_0 is not a continuous variable, the calculation of the likelihood ratio becomes problematic.

More interestingly, the critical values of the QLR statistic come from a more extensive distribution than that of the F statistic, and the statistic is applicable in large samples. The QLR statistic has a far greater probability of rejecting the null hypothesis than the F statistic when multiple discrete breaks are encountered. The first trimming mechanism was offered by Quandt (1960); the critical values of the QLR statistic are tabulated for a standard trimming of 15 percent. Kim and Siegmund (1989) proposed likelihood ratio tests to detect a change point (a broken line) in a simple regression model. They considered both the alternative in which only the intercept changes and the alternative in which both the intercept and the slope change, and derived approximations for the significance level in the intercept-change model, y_i = α_0 + β_0 x_i + e_i for i ≤ j and y_i = α_1 + β_0 x_i + e_i for i > j, where j denotes the change point. By inverting the likelihood ratio tests, a confidence region for the change point and a joint confidence region for the change point, the intercepts and the slope can be obtained; the procedure can also be used to choose between a change-point model without covariates and a model without a change point. Kim and Siegmund stated that their procedure is probably satisfactory if the x's are random, and that accuracy is lost in estimating α_0 − α_1 and β when the x_i for i ≤ j and i > j have disjoint support. Maddala (1998), however, pointed out that the above procedure is hindered by the lack of a distribution theory, and Poirier (1976) stated that the relevant likelihood ratio function does not satisfy the standard regularity conditions, so that Quandt's distribution of 2 ln R is only a sparse approximation.

The FH test, proposed by Farley and Hinich (1970), can also be applied to the general model (2). Their primary assumption is that each observation t (t = 1, 2, …, T) has an equal chance of being the switch point t_0. If the time t_0 were known, model (2) could be framed as
$$y_t = x_t\beta_1 + v_t\delta + \varepsilon_t, \qquad \ldots(8)$$
where t = 1, 2, …, T, v_t is a (1 × k) vector and δ = β_2 − β_1. When t_0 is not known, the FH test can be written (with v_t replaced by t x_t) as
$$y_t = x_t(\beta_1 + \delta t) + \varepsilon_t. \qquad \ldots(9)$$
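As a complement to the discussion of the Quandt statistic and of trimming, here is a minimal, self-contained sketch (our assumption, not the authors' code) of a sup-F (QLR-type) statistic: the Chow F statistic is evaluated at every candidate break date inside a 15 percent trimmed window and the maximum is taken. The critical values are non-standard (Andrews 1993), so the result should not be judged against an ordinary F table; the simulated data are illustrative.

```python
import numpy as np

def _ssr(y, X):
    # OLS sum of squared residuals.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def qlr_sup_f(y, X, trim=0.15):
    """sup-F (QLR) statistic: maximum Chow F over break dates in the trimmed window."""
    T, k = X.shape
    lo = max(int(trim * T), k + 1)              # leave room to estimate k parameters
    hi = min(int((1 - trim) * T), T - k - 1)
    ssr_pooled = _ssr(y, X)
    best_f, best_tau = -np.inf, None
    for tau in range(lo, hi + 1):
        ssr_u = _ssr(y[:tau], X[:tau]) + _ssr(y[tau:], X[tau:])
        f = ((ssr_pooled - ssr_u) / k) / (ssr_u / (T - 2 * k))
        if f > best_f:
            best_f, best_tau = f, tau
    return best_f, best_tau

# Illustrative use: an assumed intercept shift at t = 60 in a simulated sample.
rng = np.random.default_rng(1)
T = 150
x = rng.normal(size=T)
X = np.column_stack([np.ones(T), x])
y = np.where(np.arange(T) < 60, 0.0, 1.0) + 0.8 * x + rng.normal(size=T)
print(qlr_sup_f(y, X))
```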
The above procedure is applied to model (2) to test the null hypothesis that δ equals 0. In general, the validity of the FH test relies on more general information than the exact specification of t_0 (Vilares 1986). Farley et al. (1975) suggested that a pseudo-Chow test is also useful: the breakpoint t_0 is assumed to occur at the midpoint of the sample and the above procedure is applied. Poirier (1976) used Monte Carlo experiments to investigate and compare the three procedures above and concluded that the likelihood ratio approach is not decisive; moreover, when the break is wide, or the sample is large, the three tests above are not applicable, as stated by Maddala (1998).

The CUSUM and CUSUM-of-squares tests are among the essential classical tests and were suggested by Brown et al. (1975). They applied recursive residuals to test for a single structural break over time while parameters change. The CUSUM statistic is the cumulative sum of the standardised recursive residuals,
$$\mathrm{CUSUM}_t = \frac{1}{\hat\sigma}\sum_{j=k+1}^{t} w_j, \qquad t = k+1, \ldots, T,$$
where w_j denotes the recursive residual, and it is plotted against t. Under the null hypothesis β is constant, and the CUSUM has zero mean and variance proportional to t − k − 1. If the null hypothesis is rejected, the recursive residuals cross the boundary for some t. The CUSUM test aims to detect systematic movements of the coefficients: a tendency of a disproportionate number of recursive residuals to share the same sign indicates that the coefficients are not constant and the recursive residuals cross the boundary, as stated by Baltagi (2011). The cumulative sum of squares test uses the squared recursive residuals, again plotted against t; under the null hypothesis its expected value is (t − k)/(T − k), which varies from 0 (for t = k) to 1 (for t = T). If the null hypothesis is rejected, the squared recursive residuals cross the boundary, which determines the level of the test (Maddala 1998; Zivot 2003; Baltagi 2011). Ploberger, Kramer, and Alt (1988) extended the CUSUM test to models with lagged dependent variables, and Ploberger, Kramer, and Alt (1989) analysed the local power of the CUSUM test against heteroskedasticity. To address the power problem, they proposed a fluctuation test (based on successive parameter estimates rather than recursive residuals), first suggested by Ploberger in 1983. Ploberger and Kramer (1990; 1992) extended the CUSUM and CUSUM-of-squares tests to the linear regression model with lagged dependent variables and studied the local power of the CUSUM. They adopted a dynamic linear regression model to represent the structural shift and again proposed the fluctuation test based on parameter estimates rather than on recursive residuals. They demonstrated a drawback of the CUSUM test, namely that its asymptotic local power can be negligible, for instance when the structural shift occurs late in the sample, and that the test responds to non-constant coefficients rather than to heteroskedasticity of the disturbances; the CUSUM-of-squares test behaves asymptotically in the same way. Westlund and Tornkvist (1989) experimented with the CUSUM and CUSUM-of-squares tests: interested in testing structural stability, they applied the test statistics in Monte Carlo experiments, found that the parameter estimates underlying the statistics varied considerably, and concluded that the Monte Carlo evidence offers little scope for generalisation. Overall, the CUSUM and CUSUM-of-squares statistics are not well suited to detecting varying parameters.

Another class of tests is Andrews' (1993) sup F test. Andrews considered tests for parameter instability and a one-time structural change with an unknown change point. These tests have nontrivial asymptotic local power against all alternatives under which the parameters are non-constant, even though the change point (structural break) is unknown. If the structural break were known, one could form the usual Wald, Lagrange Multiplier and Likelihood Ratio statistics for models with no deterministic or stochastic trends. Andrews also proposed Likelihood-Ratio-like tests based on Generalised Method of Moments (GMM) estimators for nonlinear models and provided asymptotic critical values for the sup F test (Maddala 1998). The standard method employed by Andrews is as follows.
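The CUSUM procedure can likewise be sketched in a few lines. The following is a minimal illustration (not the authors' code) of the Brown-Durbin-Evans recursive residuals, the CUSUM path and the conventional 5 percent boundary, for which the coefficient 0.948 is the standard tabulated value; the simulated data and all names are assumptions.

```python
import numpy as np

def recursive_residuals(y, X):
    """Standardised one-step-ahead recursive residuals w_t, t = k+1, ..., T."""
    T, k = X.shape
    w = np.full(T, np.nan)
    for t in range(k, T):
        Xt, yt = X[:t], y[:t]
        beta, *_ = np.linalg.lstsq(Xt, yt, rcond=None)
        xt = X[t]
        # One-step prediction error scaled by its standard-deviation factor.
        fac = 1.0 + xt @ np.linalg.inv(Xt.T @ Xt) @ xt
        w[t] = (y[t] - xt @ beta) / np.sqrt(fac)
    return w[k:]

def cusum_path(y, X, a=0.948):
    """CUSUM of recursive residuals and its 5 percent boundary (a = 0.948)."""
    T, k = X.shape
    w = recursive_residuals(y, X)
    sigma = w.std(ddof=1)                 # estimate of the residual standard deviation
    cusum = np.cumsum(w) / sigma          # CUSUM_t for t = k+1, ..., T
    r = np.arange(1, T - k + 1)           # t - k
    bound = a * np.sqrt(T - k) * (1 + 2 * r / (T - k))
    return cusum, bound

# Illustrative use: an assumed level shift halfway through a simulated sample.
rng = np.random.default_rng(2)
T = 200
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 0.5]) + np.where(np.arange(T) >= 100, 1.5, 0.0) + rng.normal(size=T)
cusum, bound = cusum_path(y, X)
print("boundary crossed:", bool((np.abs(cusum) > bound).any()))
```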
… term at time t. The breakpoints T_1, T_2, …, T_m are treated as explicitly unknown; y_t, x_t and z_t are observed over T periods, while the regression coefficients are unknown. The parameter β is not subject to shift; when all the coefficients are subject to change (p = 0) the model becomes a pure structural change model (for more details see Bai and Perron 1994). They used quarterly data on the US ex-post real interest rate from 1961:1 to 1986:3; for the empirical verification they allowed up to five segments and identified two break dates (1972:3 and 1980:3) by estimation under global minimisation. The framework is useful for the treatment of linear regression models with multiple structural breaks. They did not derive the convergence rate of the sequential estimators but established the convergence rate of the estimated breakpoints. The study follows the approach of the limiting distribution of break dates under global minimisation (Bai and Perron 1995b) and the sequential estimation of multiple breaks (Bai and Perron 1997). Finally, Jushan Bai notes that when the parameter β is not subject to shift the study does not deliver a T-consistent convergence rate for the sequential estimators of the breakpoints; for details of the sequential estimation of multiple breaks see Bai and Perron (1997). Several procedures are available for estimating multiple breaks, and they are discussed next.

The first is to employ an appropriate model based on Bai and Perron's argument. Bai and Perron (1998) employed a multiple linear regression with m breaks and m + 1 regimes,
$$y_t = x_t'\beta + z_t'\delta_j + u_t, \qquad t = T_{j-1}+1, \ldots, T_j,\; j = 1, \ldots, m+1, \qquad \ldots(14)$$
where y_t is the dependent variable, x_t (p × 1) and z_t (q × 1) are vectors of covariates, u_t is the error term, and the convention T_0 = 0 and T_{m+1} = T is used. The breakpoints T_1, …, T_m are unknown, β and δ_j (j = 1, …, m+1) are the vectors of coefficients, and there are T observations. β is constant because this is a partial structural change model; when all the coefficients are subject to change (p = 0) it becomes a pure structural change model.

The model can be expressed in matrix form as
$$Y = X\beta^0 + \bar{Z}\delta^0 + U, \qquad \ldots(15)$$
where Y = (y_1, …, y_T)', X = (x_1, …, x_T)', U = (u_1, …, u_T)', δ = (δ_1', δ_2', …, δ_{m+1}')', and \bar{Z} is the matrix that diagonally partitions Z at the m-partition (T_1, …, T_m), that is \bar{Z} = diag(Z_1, …, Z_{m+1}) with Z_i = (z_{T_{i-1}+1}, …, z_{T_i})'. A superscript 0 denotes the true value of a parameter. The unknown regression coefficients (β^0, δ_1^0, …, δ_{m+1}^0) and break dates (T_1^0, …, T_m^0) are estimated under the assumption that δ_i^0 ≠ δ_{i+1}^0 (1 ≤ i ≤ m); the unknown number of breaks corresponds to the true number m^0 of discrete shifts. The least-squares principle minimises the sum of squared residuals
$$S_T(T_1, \ldots, T_m) = \sum_{i=1}^{m+1}\sum_{t=T_{i-1}+1}^{T_i}\bigl(y_t - x_t'\beta - z_t'\delta_i\bigr)^2,$$
which yields the coefficient estimates β̂ and δ̂_j for each m-partition {T_j}. The estimated breakpoints are then
$$(\hat{T}_1, \ldots, \hat{T}_m) = \arg\min_{(T_1, \ldots, T_m)} S_T(T_1, \ldots, T_m),$$
where the minimisation is over all m-partitions (T_1, …, T_m) such that T_i − T_{i−1} ≥ h, the minimum admissible segment length. The first issue in the Bai-Perron procedure is to find all possible breakpoints and admissible segments; dynamic programming is used to obtain the breakpoints (Bai and Perron 1998 and 2003). Tables 3.1 and 3.2 present triangular matrices of sums of squared residuals with T = 25, h = 5 and m = 2, showing the possible breakpoints and the calculation of admissible segments, respectively (a small computational sketch follows the table notes below).

Bai and Perron (1998 and 2006) proposed two tests of the null hypothesis of no structural break against an unknown number of breaks given some upper bound M. The first is the double maximum test; there are two versions, an equal-weight version (UDmax) and a second version that applies individual weights so that the marginal p-values are equal across m (WDmax). The detailed account is given below.
[Tables 3.1 and 3.2 (from Bai and Perron 2003): triangular matrices over initial and terminal dates for T = 25, h = 5 and m = 2. Admissible segments are marked *, while xa, xb and xc mark excluded segments, as explained in the notes below.]
Source: Bai and Perron (2003)
Notes: The vertical number indicates the initial date of a segment and the horizontal number indicates the terminal date. For example, the entry (4, 10) denotes a segment that starts at date 4 and ends at date 10, hence containing seven observations.
• xa indicates a segment not considered because a segment must be at least of length 5.
• xb indicates a segment not considered because otherwise there would be no room for a third segment of length 5.
• xc indicates a segment not considered because otherwise there would be no room for a segment of length 5 before it.
• * indicates an admissible segment.
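The triangular matrix of segment sums of squared residuals and the dynamic-programming search described above can be illustrated with a short, self-contained sketch. This is our own minimal implementation of the idea (not Bai and Perron's published code): it computes the SSR of every admissible segment of length at least h and then finds, by dynamic programming, the partition with m breaks that minimises the total SSR. The simulated series and the values T = 25, h = 5, m = 2 simply echo the table.

```python
import numpy as np

def segment_ssr_matrix(y, X, h):
    """Upper-triangular matrix of OLS SSRs for every admissible segment
    [i, j] (0-based, inclusive) containing at least h observations."""
    T = len(y)
    ssr = np.full((T, T), np.inf)
    for i in range(T):
        for j in range(i + h - 1, T):
            beta, *_ = np.linalg.lstsq(X[i:j + 1], y[i:j + 1], rcond=None)
            r = y[i:j + 1] - X[i:j + 1] @ beta
            ssr[i, j] = r @ r
    return ssr

def optimal_breaks(y, X, m, h):
    """Global minimisation of the total SSR over all partitions with m breaks
    (m + 1 segments), each of length at least h, by dynamic programming."""
    T = len(y)
    ssr = segment_ssr_matrix(y, X, h)
    # cost[b, j] = minimal SSR for observations 0..j using b breaks.
    cost = np.full((m + 1, T), np.inf)
    last = np.zeros((m + 1, T), dtype=int)
    cost[0] = ssr[0]
    for b in range(1, m + 1):
        for j in range((b + 1) * h - 1, T):
            # t is the position of the last break; (t+1)..j is the final segment.
            cands = [(cost[b - 1, t] + ssr[t + 1, j], t)
                     for t in range(b * h - 1, j - h + 1)]
            cost[b, j], last[b, j] = min(cands)
    # Backtrack from the full sample to recover the break positions.
    breaks, j = [], T - 1
    for b in range(m, 0, -1):
        j = last[b, j]
        breaks.append(j)          # break falls after observation j (0-based)
    return sorted(breaks), cost[m, T - 1]

# Illustrative use mirroring the table's dimensions: T = 25, h = 5, m = 2.
rng = np.random.default_rng(3)
X = np.ones((25, 1))
y = np.concatenate([rng.normal(0, 1, 9), rng.normal(3, 1, 8), rng.normal(-2, 1, 8)])
print(optimal_breaks(y, X, m=2, h=5))
```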
In general, the trimming value h is not fixed a priori; it is chosen in relation to the number of regressors. Two cases arise concerning the distribution of the errors and the regressors. First, when the regressors contain no lagged dependent values, the residuals (the error term u_t) are allowed to display substantial serial correlation and heteroskedasticity. Second, when the regressors contain lagged dependent values, the residuals are assumed to be serially uncorrelated. The two cases also allow the distributions of the regressors and of the errors to differ across segments (Bai and Perron 2006).

For constructing confidence intervals for the break dates, the Bai-Perron strategy is to adopt an asymptotic framework in which the size (magnitude) of the shifts converges to zero as the sample size increases, and the break dates are then based on the resulting asymptotic distribution. For the coefficient covariance matrix, Bai and Perron use a HAC estimator, a first-order autoregressive approximation with the quadratic spectral kernel and the Andrews bandwidth, applied to each element of the vector {z_t u_t} segment by segment and over the whole sample.

Apart from the double maximum tests, information criteria such as the BIC and the LWZ criterion can be used to identify the number of breaks. In the presence of serial correlation another information criterion, the AIC, does not perform well; the BIC does not perform well in the presence of a lagged dependent variable; and the LWZ criterion performs better under the null hypothesis of no break but underestimates the number of breaks when some are present.
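To make the information-criterion route concrete, the following sketch (our assumption about the parameter count, not a formula taken from the paper) computes a BIC-type criterion for each candidate number of breaks from the corresponding global-minimum sums of squared residuals, for instance those produced by a dynamic-programming search such as the one sketched earlier, and picks the number of breaks that minimises it.

```python
import numpy as np

def bic_for_breaks(ssr_by_m, T, q, p=0):
    """BIC-type criterion for each candidate number of breaks m.

    ssr_by_m maps m to the global-minimum SSR with m breaks; q is the number
    of shifting coefficients per regime, p the number of non-shifting ones.
    BIC(m) = ln(SSR_m / T) + ln(T) * (number of parameters) / T, where the
    parameter count includes the m break dates (an assumed convention here).
    """
    bic = {}
    for m, ssr in ssr_by_m.items():
        n_par = p + (m + 1) * q + m
        bic[m] = np.log(ssr / T) + n_par * np.log(T) / T
    return min(bic, key=bic.get), bic

# Illustrative use with assumed SSR values for m = 0, ..., 3 and T = 200, q = 2.
ssr_by_m = {0: 410.0, 1: 255.0, 2: 214.0, 3: 211.5}
print(bic_for_breaks(ssr_by_m, T=200, q=2))
```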
… case one (cor_u = 0, het_u = 0); case two is no serial correlation in the errors but a different variance for the residuals across segments (cor_u = 0, het_u = 1). Imposing a common distribution for the regressors across segments (het_z = 0) gave the tests their worst properties when the data do not have an invariant distribution. Bai and Perron provided the relevant asymptotic critical values for the multiple-break tests and for the sequential (L+1 | L) test for a trimming of .05, for q from 1 to 10 and k from 1 to 9. In the case of no serial correlation in the errors and a different variance for the residuals across segments (cor_u = 0, het_z = 1, het_u = 0), arbitrarily small trimming is allowed. With a trimming of .10 the maximum number of breaks considered is 8; with a trimming of .15 the maximum number of breaks allowed is 5; with .20, three breaks are allowed; and with .25, two breaks are permitted.

The above considerations require the specification of a particular number of breaks m under the alternative hypothesis, and researchers are often reluctant to pre-specify the number of breaks for inference. To this end, Bai and Perron (1998 and 2006) proposed two tests of the null hypothesis of no structural break against an unknown number of breaks given some upper bound M. The first is the double maximum test with equal weights; the second applies individual weights so that the marginal p-values are comparable across m.

Double maximum test:
$$D\max F_T(M, q; a_1, \ldots, a_M) = \max_{1\le m \le M} a_m \sup_{(\lambda_1, \ldots, \lambda_m)\in\Lambda_\varepsilon} F_T(\lambda_1, \ldots, \lambda_m; q),$$
with fixed weights {a_1, …, a_M}. Setting all weights equal to unity gives the equal-weight version
$$UD\max F_T(M, q) = \max_{1\le m \le M} \sup_{(\lambda_1, \ldots, \lambda_m)\in\Lambda_\varepsilon} F_T(\lambda_1, \ldots, \lambda_m; q).$$
For fixed m, the limit of F_T(λ_1, …, λ_m; q) depends on chi-squared random variables with q degrees of freedom, each divided by m. Bai and Perron use the asymptotically equivalent version
$$UD\max F_T(M, q) = \max_{1\le m \le M} F_T(\hat{\lambda}_1, \ldots, \hat{\lambda}_m; q),$$
where λ̂_j = T̂_j/T (j = 1, …, m) are the break fractions obtained from the global minimisation of the sum of squared residuals over the breakpoints.

The second test applies weights so that the marginal p-values are equal across values of m. The weights depend on q and on the significance level α of the test through the asymptotic critical values c(q, α, m) of sup_{(λ_1,…,λ_m)∈Λ} F_T(λ_1, …, λ_m; q). Setting a_1 = 1 and, for m greater than 1, a_m = c(q, α, 1)/c(q, α, m), the resulting statistic is denoted
$$WD\max F_T(M, q) = \max_{1\le m \le M} \frac{c(q, \alpha, 1)}{c(q, \alpha, m)}\, F_T(\hat{\lambda}_1, \ldots, \hat{\lambda}_m; q).$$
Unlike UDmax F_T(M, q), the significance level of WDmax F_T(M, q) enters through the weights themselves. Based on Bai and Perron's critical values, with a trimming of .05, .10 or .15 up to five breaks are allowed; with a trimming of .20, three breaks; and with .25, two breaks. (A small computational sketch of UDmax and WDmax is given below, after the repartition procedure.)

Another class of test is the sequential test of L versus L+1 breaks. It is based on the difference between the sums of squared residuals obtained with L and with L+1 breaks. The test is applied to each of the segments [T_{i−1}+1, T_i] (i = 1, …, L+1). It does not require the global sum of squared residuals over the whole sample but works with the break fractions λ_i = T_i/T, which converge to their true values. An additional break date is selected where the (L+1)-break model attains its overall minimum, provided the resulting sum of squared residuals is sufficiently smaller than that of the L-break model. Bai and Perron (2006, pp. 19-20) recommend the sequential L versus L+1 approach for locating the breakpoints.

The Repartition Procedure
This technique re-estimates each of the breakpoints from the initial estimates: initial T-consistent estimators k̂_i of k_i^0 (i = 1, 2) are obtained first. To estimate k_1^0 the subsample [1, k̂_2] is used, and to estimate k_2^0 the subsample [k̂_1, T] is used; the resulting estimators are denoted k_1^* and k_2^*. Because k̂_i is close to k_i^0, the procedure effectively uses the sample [k_{i−1}^0 + 1, k_{i+1}^0] to estimate k_i^0 (i = 1, 2, with k_0^0 = 1 and k_3^0 = T) (Bai and Perron 1997).
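The double maximum statistics are simple to assemble once the sup-F statistics for m = 1, …, M and the Bai-Perron critical values c(q, α, m) are available. The sketch below (ours; the numerical values are placeholders, not entries from the published tables) forms UDmax as the unweighted maximum and WDmax as the maximum after reweighting each statistic by c(q, α, 1)/c(q, α, m).

```python
def double_max_tests(sup_f_by_m, crit_by_m):
    """UDmax and WDmax from sup-F statistics F_T(m), m = 1, ..., M.

    crit_by_m holds the level-alpha critical values c(q, alpha, m), which must
    be taken from the Bai-Perron tables; they are assumed inputs here.
    """
    ms = sorted(sup_f_by_m)                      # m = 1, ..., M
    ud_max = max(sup_f_by_m[m] for m in ms)      # equal weights
    c1 = crit_by_m[1]
    wd_max = max((c1 / crit_by_m[m]) * sup_f_by_m[m] for m in ms)
    return ud_max, wd_max

# Illustrative use with assumed sup-F values and placeholder critical values.
sup_f_by_m = {1: 9.8, 2: 7.1, 3: 5.4}
crit_by_m = {1: 8.6, 2: 7.2, 3: 6.0}
print(double_max_tests(sup_f_by_m, crit_by_m))
```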
Some issues have been raised about the Bai-Perron tests. It is difficult to draw firm conclusions about the exact number and position of multiple breaks, as the results are sensitive to the assumptions made about the number of breakpoints and the size of segments (the trimming parameter). Dholakia and Sapre (2011) found different numbers and positions of breakpoints when the assumptions about the number of breakpoints, the size of segments, the base year of the output data and the length of the reference period were changed. Given the inexactness of the break positions and of their number, one should apply both the Bai-Perron and the Quandt-Andrews breakpoint tests, even though the latter delivers only a single breakpoint.

References
Andrews, DWK and Ploberger, W. "Optimal Tests When a Nuisance Parameter is Present Only Under the Alternative." Econometrica, vol. 62, no. 6, 1994, pp. 1383-1414.
Andrews, DWK. "Tests for Parameter Instability and Structural Change with Unknown Change Point." Econometrica, vol. 61, no. 4, 1993, pp. 821-856.
Bai, Jushan. "Least Squares Estimation of a Shift in Linear Processes." Journal of Time Series Analysis, vol. 15, no. 5, 1994, pp. 453-472.
Baltagi, B. Econometrics, Springer, New York, 2011.
Banerjee, A. et al. "Recursive and Sequential Tests of the Unit Root and Trend Break Hypotheses: Theory and International Evidence." Journal of Business and Economic Statistics, vol. 10, no. 3, 1992, pp. 271-287.
Brown, RL, Durbin, J and Evans, JM. "Techniques for Testing the Constancy of Regression Relationships Over Time." Journal of the Royal Statistical Society, Series B, vol. 37, no. 2, 1975, pp. 149-192.
Cantrell, RS, Burrows, PM and Vuong, QH. "Interpretation and Use of Generalised Chow Test." International Economic Review, vol. 32, no. 3, 1991, pp. 725-741.
Chow, Gregory C. "Tests of Equality Between Sets of Coefficients in Two Linear Regressions." Econometrica, vol. 28, no. 3, 1960, pp. 591-605.
Christiano, LJ. "Searching for a Break in GNP." Journal of Business and Economic Statistics, vol. 10, no. 3, 1992, pp. 237-250.
Bai, Jushan and Pierre Perron. "Computation and Analysis of Multiple Structural Change Models." Journal of Applied Econometrics, vol. 18, no. 1, 2003, pp. 1-22.
Bai, Jushan and Pierre Perron. "Critical Values for Multiple Structural Change Tests." The Econometrics Journal, vol. 6, no. 1, 2003, pp. 72-78.
Dholakia, HR and Sapre, AA. "Estimating Structural Breaks Endogenously in India's Post-Independence Growth Path: An Empirical Critique." Journal of Quantitative Economics, vol. 9, no. 2, 2011, pp. 73-87.
Dufour, JM. "Recursive Stability Analysis of Linear Regression Relationships." Journal of Econometrics, vol. 19, no. 3, 1982, pp. 31-76.
Bai, Jushan and Pierre Perron. "Estimating and Testing Linear Models with Multiple Structural Changes." Econometrica, vol. 66, no. 1, 1998, pp. 47-78.
Bai, Jushan. "Estimating Multiple Breaks One at a Time." Econometric Theory, vol. 13, no. 3, 1997, pp. 315-352.
Bai, Jushan. "Estimation of a Change Point in Multiple Regression Models." Review of Economics and Statistics, vol. 79, no. 4, 1997, pp. 551-563.
Gujarati, DN. Basic Econometrics, The McGraw-Hill Book Company, 2004.
Hansen, BE. "The New Econometrics of Structural Change: Dating Breaks in U.S. Labor Productivity." The Journal of Economic Perspectives, vol. 15, no. 4, 2001, pp. 117-128.
Kim, HJ and Siegmund, D. "The Likelihood Ratio Test for a Change-Point in Simple Linear Regression." Biometrika, vol. 76, no. 3, 1989, pp. 409-423.
Lo, Andrew and Newey, WK. "A Large-Sample Chow Test for the Linear Simultaneous Equation." Economics Letters, vol. 18, no. 4, 1985, pp. 351-353.
Maddala, GS and Kim, IM. Unit Roots, Cointegration and Structural Change, Cambridge University Press, 1998.
Perron, P. "The Great Crash, the Oil Price Shock, and the Unit Root Hypothesis." Econometrica, vol. 57, no. 6, 1989, pp. 1361-1401.
Perron, P and Vogelsang, TJ. "Testing for a Unit Root in a Time Series with a Changing Mean: Corrections and Extensions." Journal of Business and Economic Statistics, vol. 10, no. 4, 1992, pp. 467-470.
Ploberger, W and Kramer, W. "The Local Power of the CUSUM and CUSUM of Squares Test." Econometric Theory, vol. 6, no. 3, 1990, pp. 335-347.
Ploberger, W. et al. "A New Test for Structural Stability in the Linear Regression Model." Journal of Econometrics, vol. 40, no. 2, 1989, pp. 307-318.
Ploberger, W. et al. "A Modification of the CUSUM Test in the Linear Regression Model with Lagged Dependent Variables." Empirical Economics, vol. 14, no. 2, 1989, pp. 65-75.
Quandt, Richard. "The Estimation of the Parameters of a Linear Regression System Obeying Two Separate Regimes." Journal of the American Statistical Association, vol. 53, no. 284, 1958, pp. 873-880.
Stock, JH and Watson, Mark. Introduction to Econometrics, Addison-Wesley, New York, 2010.
Perron, Pierre. "Testing for a Unit Root in a Time Series with a Changing Mean." Journal of Business and Economic Statistics, vol. 8, no. 2, 1990, pp. 153-162.
Ploberger, Werner and Kramer, Walter. "The CUSUM Test with OLS Residuals." Econometrica, vol. 60, no. 2, 1992, pp. 271-285.
Perron, Pierre and Vodounou, Cosme. "The Variance Ratio Test: An Analysis of Size and Power Based on a Continuous-Time Asymptotic Framework." Econometric Theory, vol. 21, no. 3, 2005, pp. 562-592.
Wallack, Seddon. "Structural Breaks in Indian Macroeconomic Data." Economic and Political Weekly, vol. 38, no. 41, 2003, pp. 4312-4315.
Zeileis, Achim et al. "strucchange: An R Package for Testing for Structural Change in Linear Regression Models." Journal of Statistical Software, vol. 7, no. 2, 2002, pp. 1-38.
Zivot, E and Andrews, DWK. "Further Evidence on the Great Crash, the Oil-Price Shock, and the Unit-Root Hypothesis." Journal of Business and Economic Statistics, vol. 10, no. 3, 1992, pp. 251-270.
Author Details
P. Muthuramu teaches at the Department of Development Studies, Rajiv Gandhi National Institute of Youth Development, Chennai, Tamil Nadu, India. Email ID: [email protected].
T. Uma Maheswari, Assistant Professor, Department of Economics, Lady Doak College, Madurai, Tamil Nadu, India