Time Series Analysis
Models for Nonstationarity and Noninvertibility
We deal with linear time series models on which stationarity or invertibility is not imposed. Using simple examples arising from estimation and testing problems, we indicate nonstandard aspects of the departure from stationarity or invertibility. In particular, asymptotic distributions of various statistics are derived by the eigenvalue approach under the normality assumption on the underlying processes. As a prelude to discussions in later chapters, we also present equivalent expressions for limiting random variables based on the other two approaches, which I call the stochastic process approach and the Fredholm approach.
1.1 Statistics from the One-Dimensional Random Walk
Let us consider the following simple nonstationary model:

y_j = y_{j-1} + ε_j,  y_0 = 0  (j = 1, ..., T),  (1.1)

where {ε_j} are independent and identically distributed with common mean 0 and variance 1, which is abbreviated as i.i.d.(0, 1). The model (1.1) is usually referred to as the random walk. It is also called the unit root process in the econometrics literature.
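As a quick illustration, the random walk (1.1) can be simulated by accumulating i.i.d. innovations. This is a minimal sketch, not part of the original text; the names T and rng are illustrative choices, and Gaussian innovations are used for convenience.

```python
import numpy as np

# Sketch: simulate T steps of the random walk (1.1) with i.i.d.(0, 1)
# innovations (Gaussian here for convenience).
rng = np.random.default_rng(0)
T = 1000
eps = rng.standard_normal(T)   # epsilon_1, ..., epsilon_T
y = np.cumsum(eps)             # y_j = y_{j-1} + eps_j, with y_0 = 0

# Since E(y_j^2) = j for the random walk, y_T is of order sqrt(T),
# not O(1) as in the stationary case.
print(y[-1])
```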
Let us deal with the following two statistics arising from the model (1.1):

U_T = (1/T²) Σ_{j=1}^T y_j²,  V_T = (1/T²) Σ_{j=1}^T (y_j − ȳ)²,

where ȳ = (1/T) Σ_{j=1}^T y_j. Each second moment statistic has a normalizer T², which is different from the stationary case, and is necessary to discuss the limiting distribution as T → ∞. In fact, noting that E(y_j²) = j, we have

E(U_T) = (1/T²) Σ_{j=1}^T j = (T + 1)/(2T) → 1/2.
It holds [Fuller (1996, p. 220)] that U_T = O_p(1) and V_T = O_p(1), where X_T = O_p(1) means that, for every ε > 0, there exists a positive number M_ε such that P(|X_T| > M_ε) ≤ ε for all T. It is anticipated that U_T and V_T have different nondegenerate limiting distributions.
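The behavior of the two second-moment statistics can be checked by simulation. The following sketch (with illustrative names U, V, n_rep) estimates E(U_T) by Monte Carlo, which should approach (T + 1)/(2T) ≈ 1/2, while both statistics remain bounded in probability under the T² normalization.

```python
import numpy as np

# Monte Carlo sketch of U_T = T^{-2} sum y_j^2 and
# V_T = T^{-2} sum (y_j - ybar)^2 for the random walk (1.1).
rng = np.random.default_rng(1)
T, n_rep = 500, 2000
eps = rng.standard_normal((n_rep, T))
y = np.cumsum(eps, axis=1)             # n_rep independent random walks

U = (y ** 2).sum(axis=1) / T ** 2
V = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / T ** 2

# E(U_T) = (T + 1) / (2T), close to 1/2 for large T.
print(U.mean())
```

Note that V_T = U_T − ȳ²/T in every replication, so V_T never exceeds U_T.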
We now attempt to derive the limiting distributions of U_T and V_T. There are three approaches for this purpose, which I call the eigenvalue approach, the stochastic process approach, and the Fredholm approach. The first approach is described here in detail, whereas the second and third are only briefly described and the details are discussed in later chapters.
1.1.1 Eigenvalue Approach
The eigenvalue approach requires a distributional assumption on {ε_j}. We assume that the ε_j are independent and identically normally distributed with common mean 0 and variance 1, which is abbreviated as NID(0, 1).
We also need to compute the eigenvalues of the matrices appearing in quadratic forms. To see this, the observation vector y = (y_1, ..., y_T)' may be expressed as

y = Cε,  ε = (ε_1, ..., ε_T)',

where C is the T × T lower triangular matrix whose elements on and below the diagonal are all equal to 1, and its inverse C^{-1} is the lower bidiagonal matrix whose diagonal elements are 1 and whose subdiagonal elements are −1.
The matrix C may be called the random walk generating matrix and plays an important role in subsequent discussions.
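The relation y = Cε and the form of C^{-1} can be verified numerically. This is a sketch with an illustrative dimension T = 5.

```python
import numpy as np

# The random walk generating matrix: lower triangular of ones.
T = 5
C = np.tril(np.ones((T, T)))

# Its inverse: 1 on the diagonal, -1 on the subdiagonal (the
# first-difference matrix).
C_inv = np.eye(T) - np.eye(T, k=-1)

# y = C eps reproduces the partial sums y_j = eps_1 + ... + eps_j.
eps = np.arange(1.0, T + 1)
print(C @ eps)       # cumulative sums of eps
print(C @ C_inv)     # identity matrix
```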
We can now rewrite U_T and V_T as

U_T = (1/T²) ε'C'Cε,  V_T = (1/T²) ε'C'MCε,  M = I_T − (1/T) ee',  e = (1, ..., 1)' : T × 1.
Let us compute the eigenvalues and eigenvectors of C'C and C'MC. The eigenvalues of C'C were obtained by Rutherford (1946) (see also Problem 1.1 in this chapter) by computing those of

(C'C)^{-1} = C^{-1}(C^{-1})',

which is the tridiagonal matrix with diagonal elements (1, 2, ..., 2) and off-diagonal elements −1. The j-th largest eigenvalue λ_j of C'C is found to be

λ_j = 1 / (4 sin²((2j − 1)π / (2(2T + 1)))),  (j = 1, ..., T).

There exists an orthogonal matrix P such that P'(C'C)P = diag(λ_1, ..., λ_T), where the k-th column of P is an eigenvector corresponding to λ_k. It can be shown [Dickey and Fuller (1979)] that the (j, k)-th component of P is given by

p_{jk} = (2/√(2T + 1)) cos((2j − 1)(2k − 1)π / (2(2T + 1))).
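The closed forms above are easy to check numerically. The sketch below builds C'C for a small illustrative dimension, compares its eigenvalues with the sine formula, and verifies that the cosine matrix P diagonalizes it.

```python
import numpy as np

# Numerical check of the eigenvalues lambda_j of C'C and of the
# orthogonal matrix P that diagonalizes it (T = 6 is illustrative).
T = 6
C = np.tril(np.ones((T, T)))
A = C.T @ C

j = np.arange(1, T + 1)
# lambda_j = 1 / (4 sin^2((2j-1) pi / (2(2T+1)))), j-th largest first.
lam = 1.0 / (4.0 * np.sin((2 * j - 1) * np.pi / (2 * (2 * T + 1))) ** 2)

# p_{jk} = 2 / sqrt(2T+1) * cos((2j-1)(2k-1) pi / (2(2T+1)))
jj, kk = np.meshgrid(j, j, indexing="ij")
P = 2.0 / np.sqrt(2 * T + 1) * np.cos(
    (2 * jj - 1) * (2 * kk - 1) * np.pi / (2 * (2 * T + 1)))

print(np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(lam)))
print(np.allclose(P.T @ A @ P, np.diag(lam)))
```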
On the other hand, C'MC is evidently singular because the vector e is the first column of C and Me = 0, so that the first column of MC is a zero vector. In fact, it holds that the first row and first column of C'MC are zero vectors, with the lower-right (T − 1) × (T − 1) block equal to the matrix G given by

G = C̃'M̃C̃.

Here C̃ and ẽ are the last (T − 1) × (T − 1) and (T − 1) × 1 submatrices of C and e, respectively, whereas M̃ = I_{T−1} − (1/T) ẽẽ'. The eigenvalues of G
can be easily obtained (Prob
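The block structure of C'MC described above can be verified directly. The following sketch (with illustrative names C_sub, e_sub, M_sub for the trailing submatrices) checks that the first row and column of C'MC vanish and that its lower-right block equals G = C̃'M̃C̃.

```python
import numpy as np

# Verify the singular block structure of C'MC for a small T.
T = 5
C = np.tril(np.ones((T, T)))
e = np.ones((T, 1))
M = np.eye(T) - e @ e.T / T            # centering matrix

B = C.T @ M @ C                        # first row and column should vanish

C_sub = C[1:, 1:]                      # last (T-1) x (T-1) submatrix of C
e_sub = e[1:]                          # last (T-1) x 1 subvector of e
M_sub = np.eye(T - 1) - e_sub @ e_sub.T / T   # note the divisor T, not T-1
G = C_sub.T @ M_sub @ C_sub

print(np.allclose(B[0, :], 0), np.allclose(B[:, 0], 0))
print(np.allclose(B[1:, 1:], G))
```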