Tuesday, July 19, 2011

Sims and Zha IRF Error Bands

After completing the Monte Carlo error bands, I moved on to integrating Sims and Zha error bands into the statsmodels package. These are based on pages 1127 to 1129 of Chris Sims and Tao Zha's 1999 Econometrica article (Vol. 67, No. 5), "Error Bands for Impulse Responses."

This method took a long time just to get my head around, and a lot of trial and error. While Sims and Zha use Bayesian sampling from the joint distribution of the coefficients and covariance matrix to generate the draws of the MA(n) representations, the method used here to generate these draws is a simpler Monte Carlo simulation. In order for these error bands to truly follow the prescription of Sims and Zha (SZ), the Bayesian sampling methods would need to be employed.

Here's a quick overview of the theory.

Given a covariance matrix Sigma, we can perform an eigenvalue decomposition:

\Sigma = W \Lambda W'

where Lambda is diagonal and each diagonal element of Lambda is an eigenvalue of Sigma. Column 'k' of W is the eigenvector corresponding to the 'k'th diagonal element of Lambda, i.e., the 'k'th eigenvector.
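As a quick numpy sketch of this decomposition (the data below is made up purely to have a covariance matrix to decompose):

import numpy as np

# Arbitrary data, just to produce a covariance matrix for illustration
x = np.random.randn(500, 4)
sigma = np.cov(x, rowvar=0)

# For a symmetric matrix, eigh returns the eigenvalues (the diagonal
# of Lambda) and the eigenvectors (the columns of W)
eigva, W = np.linalg.eigh(sigma)

# Check that Sigma = W Lambda W'
assert np.allclose(sigma, np.dot(W, np.dot(np.diag(eigva), W.T)))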

We will also define our moving average representation, or impulse response function, as:

c_{ij} = (c_{ij,t}, c_{ij,t+1}, \ldots, c_{ij,t+h})'

where c_{ij} is the time series vector (periods 't' through 't+h') of responses of variable 'i' to a shock to variable 'j' in period 't'.
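In array terms, c_{ij} is just one slice of the full set of responses. A tiny sketch of the indexing convention (the (periods, response, impulse) layout matches, as far as I can tell, the statsmodels IRF arrays; the values here are placeholders):

import numpy as np

# irfs[t, i, j]: period-t response of variable i to a shock to
# variable j (placeholder values for illustration)
h, neqs = 20, 3
irfs = np.zeros((h + 1, neqs, neqs))

# c_ij: the full time path of variable i's response to shock j
i, j = 0, 1
c_ij = irfs[:, i, j]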

SZ make the case that when the time series model (VAR) is fit to data that is not smooth (differenced, etc.), most of the variation will be contained in the first few principal components of W. With this information, SZ propose three methods for characterizing the uncertainty around the estimated impulse responses.

The three arrays of graphs shown below can be produced with the following code:

import numpy as np
import scikits.statsmodels.api as sm
from scikits.statsmodels.tsa.api import VAR
import matplotlib.pyplot as plt

# Load the macro dataset and keep three series
mdata = sm.datasets.macrodata.load().data
mdata = mdata[['realgdp', 'realcons', 'realinv']]
names = mdata.dtype.names
data = mdata.view((float, 3))

# Difference the logged data
data = np.diff(np.log(data), axis=0)

mymodel = VAR(data, names=names)
res = mymodel.fit(maxlags=3, ic=None)

# Plot responses to a realgdp shock with each of the three
# Sims-Zha error band types
res.irf(periods=20).plot(impulse='realgdp', stderr_type='sz1',
                         repl=1000, seed=30)
res.irf(periods=20).plot(impulse='realgdp', stderr_type='sz2',
                         repl=1000, seed=30)
res.irf(periods=30).plot(impulse='realgdp', stderr_type='sz3',
                         repl=1000, seed=30)
plt.show()

At this point, I have only implemented this for the non-orthogonalized impulse responses, but Sims and Zha explicitly address the orthogonalized case in their paper, and it is analogous to the methods described below.

1) Symmetric, assumes Gaussian uncertainty. These error bands add to and subtract from the estimated impulse response function an error completely defined by the chosen principal component [1]:

\hat{c}_{ij} \pm W_{\cdot k} \sqrt{\lambda_k}

where W_{.k} is the column of W corresponding to the 'k'th eigenvalue \lambda_k of Sigma. The above equation gives the 68% probability bands; for 95% bands, the second term is simply multiplied by a scalar of 1.96 on both sides.
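Here is a rough sketch of how these symmetric bands can be computed; the function name and arguments are mine for illustration, not the package API:

import numpy as np

def sz1_bands(irf, sigma, k=None, q=1.0):
    # irf:   estimated response vector c_ij, shape (h+1,)
    # sigma: covariance of the MC draws of c_ij, shape (h+1, h+1)
    # k:     chosen principal component; defaults to the one with the
    #        largest eigenvalue, as SZ suggest
    # q:     1.0 gives 68% bands, 1.96 gives 95% bands
    eigva, W = np.linalg.eigh(sigma)
    if k is None:
        k = np.argmax(eigva)
    half_width = q * W[:, k] * np.sqrt(eigva[k])
    return irf - half_width, irf + half_width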

In our three-variable model, this method produces the following error bands for the responses to a GDP shock:

[Figure: responses of realgdp, realcons, and realinv to a realgdp shock with symmetric (sz1) error bands]

Looks nice enough. These bands are symmetric by construction, so I wasn't too worried about them.

2) Non-symmetric error bands generated by Monte Carlo draws, where covariance across time, but not across variables, is taken into account.

In this case, instead of assuming Gaussian uncertainty, we retain the draws used to estimate the initial covariance matrix, Sigma, and for each of these draws we calculate gamma_k:

\gamma_k = W_{k \cdot} c_{ij}

where W_{k.} is the 'k'th row of W, and 'k' refers to the component with the largest eigenvalue of Sigma. Using the quantiles of gamma_k across the MC draws, we can generate 68% probability bands as follows:

\hat{c}_{ij} + \gamma_{k,.16} W_{\cdot k} \quad \text{and} \quad \hat{c}_{ij} + \gamma_{k,.84} W_{\cdot k}

where the subscripts on gamma_k refer to the 16th and 84th percentiles of the gamma draws. In our three-variable case, this produces the following graphical representation:

[Figure: responses to a realgdp shock with non-symmetric (sz2) error bands]
We can see that in this case the uncertainty clearly drops precipitously once we hit a certain t in the time series representation.

Just for completeness, if we look at the responses of the same variables to a shock to real consumption, we notice that this method does not even guarantee that the probability bands contain the estimated impulse response function:

[Figure: responses to a realcons shock with sz2 error bands]

While this was extremely worrisome at first, Sims and Zha do give some examples where the probability bands do not contain the estimated impulse response function for certain 't'. These error bands are meant to give the researcher an idea of the symmetry (or lack thereof) of the posterior distribution of the impulse response functions.
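To make method 2 concrete, here is a minimal sketch under the same caveats as before (illustrative names, and it reads gamma_k as the projection of each draw's deviation from the point estimate onto the 'k'th eigenvector):

import numpy as np

def sz2_bands(irf, irf_draws):
    # irf:       estimated response vector c_ij, shape (h+1,)
    # irf_draws: retained MC draws of c_ij, shape (repl, h+1)
    eigva, W = np.linalg.eigh(np.cov(irf_draws, rowvar=0))
    k = np.argmax(eigva)  # component with the largest eigenvalue

    # One gamma_k per draw: each draw's deviation from the point
    # estimate projected onto the k'th eigenvector
    gamma = np.dot(irf_draws - irf, W[:, k])

    g16, g84 = np.percentile(gamma, [16, 84])
    return irf + g16 * W[:, k], irf + g84 * W[:, k]

Note that if both quantiles of gamma_k share the same sign, the band sits entirely on one side of the point estimate, which is exactly the behavior seen in the consumption plot above.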

3) Non-symmetric error bands generated by Monte Carlo draws, where covariance both across time and across variables is taken into account.

Here, instead of treating each vector of responses individually, we consider each set of impulse response functions as a single system. Sims and Zha note that while in most cases the majority of the covariance will be between intertemporal observations of a single variable, considering inter-variable time series covariance may be valuable in certain situations.

In order to investigate how the different c_{ij} relate to each other, we stack the impulse response functions that respond to a single shock 'j' into one long vector and compute the eigenvalue decomposition of the covariance of this stacked vector across the MC draws. This yields eigenvectors that capture the variation both across time periods and across variables. We calculate our gamma_k in a manner analogous to the second method and add the appropriate gamma_k quantiles to the estimated response, as sketched below.
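Here is a sketch of that stacking idea, again with illustrative names (the ordering of variables within the stacked vector is an implementation detail):

import numpy as np

def sz3_bands(irfs, irf_draws):
    # irfs:      estimated responses to one shock j, shape (h+1, neqs)
    # irf_draws: MC draws of those responses, shape (repl, h+1, neqs)
    repl = irf_draws.shape[0]

    # Stack every variable's response to shock j into one long vector
    # per draw, so the eigendecomposition captures covariance across
    # variables as well as across time
    stacked = irf_draws.reshape(repl, -1)
    point = irfs.ravel()

    eigva, W = np.linalg.eigh(np.cov(stacked, rowvar=0))
    k = np.argmax(eigva)
    gamma = np.dot(stacked - point, W[:, k])
    g16, g84 = np.percentile(gamma, [16, 84])

    lower = (point + g16 * W[:, k]).reshape(irfs.shape)
    upper = (point + g84 * W[:, k]).reshape(irfs.shape)
    return lower, upper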

It happens that in our system, the cross-variable covariance does not reveal much additional information about a shock to GDP:

[Figure: responses to a realgdp shock with sz3 error bands]

It seems as though the variation in real GDP dominates the variation in the other variables.

This can also be seen by examining the response of real GDP to all three shocks:

[Figure: responses of realgdp to shocks to each of the three variables with sz3 error bands]
Altogether these methods are meant to be used to examine the characteristics of the time series representation of the data. The first Sims and Zha method would most likely be the one to publish in a paper, but the other two methods give the researcher a more complete picture of the nature of the posterior distribution.

A few small things still need to be worked out for the Sims and Zha methods to be a complete part of the VAR package. As I mentioned before, they still need to be implemented for orthogonalized IRFs, but this will not be difficult (the paper is very clear in moving toward this implementation). It will also be important to give the user a choice of which principal component to use when characterizing the data. For Sims and Zha error bands 1 and 2, the user can pass in a matrix of integers that correspond to the chosen principal components of the variance-covariance matrix.
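As a usage sketch, continuing from the code block above, component choice might eventually look something like this; the 'component' keyword and the shape of the matrix are my assumptions about the eventual interface, not settled API:

# Hypothetical: request the second principal component (index 1) for
# every impulse/response pair; keyword name and shape are assumptions
comp = np.ones((3, 3), dtype=int)
res.irf(periods=20).plot(impulse='realgdp', stderr_type='sz2',
                         repl=1000, seed=30, component=comp)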

Scheduling from here on out (let me know what you think; updated):

1) I'd like to completely integrate Sims and Zha error bands. Specifically, this means:

a) Component choice for SZ3

b) Orthogonalized error bands

c) Clean up code

I will aim to finish the above tasks by this Saturday (7/23).

2) After this, I would like to move on to structural VAR (SVAR) implementation. This will in turn feed back into the error band methods that I have been working on. SVAR will draw from the ML methods already present in statsmodels. I'd like to finish an SVAR estimation method by August 3rd.

3) Once SVAR is completely integrated into the package, per Skipper's suggestion, I will be using pure Python to generalize the Kalman filter. I'll have more questions once I reach that point.

I'd really like to move much more quickly than I did in the first half of GSoC and hit these goals.

Bart

1. SZ suggest using the largest eigenvalue(s) of Lambda, as they will most likely identify the majority of the variation in this type of data.