Reference#
For more on testing a Linear Factor Pricing Model, see Cochrane, *Asset Pricing*, Chapter 12.
Technical considerations that we will not get into…#
Cross-sectional#
The statistical significance of the cross-sectional factor premia requires an adjustment for the fact that the cross-sectional regression relies on first-stage (time-series) beta estimates, which are themselves measured with error. This “Shanken correction factor” inflates the standard errors of the factor premia.
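As a sketch of what the correction looks like, consider the case of an OLS cross-sectional regression of mean returns on the betas. With \(\Sigma\) denoting the covariance of the time-series residuals and \(\Sigma_f\) the covariance of the factors, the Shanken-corrected covariance of the estimated premia is commonly stated as

\[
\operatorname{var}\big(\hat\lambda\big) \;=\; \frac{1}{T}\left[\big(\beta'\beta\big)^{-1}\beta'\Sigma\,\beta\,\big(\beta'\beta\big)^{-1}\big(1+\lambda'\Sigma_f^{-1}\lambda\big) \;+\; \Sigma_f\right]
\]

The multiplier \(1+\lambda'\Sigma_f^{-1}\lambda\) is the correction factor: it is always at least one, so it can only widen the standard errors.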
One can calculate a total model error from the cross-sectional regression. This requires the (somewhat strange) idea of checking whether a regression’s residuals are “too big”. The formula is a bit of a mess, but it gives a statistical test of whether the R-squared is “close enough” to one.
Fama-MacBeth estimation is one route to better cross-sectional standard errors.
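The Fama-MacBeth procedure runs one cross-sectional regression per period and then takes standard errors from the time-series of the period-by-period estimates. A minimal sketch on simulated data (all numbers here are made up for illustration; in practice the betas would come from first-stage time-series regressions):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 600, 25                        # months and test assets (simulated)
betas = rng.normal(1.0, 0.3, size=N)  # loadings on a single factor
lam_true = 0.5                        # true factor premium
f = rng.normal(0.0, 2.0, size=T)      # demeaned factor shocks
# simulated excess returns: r_{it} = beta_i * (lam + f_t) + eps_{it}
r = betas[None, :] * (lam_true + f[:, None]) + rng.normal(0.0, 3.0, size=(T, N))

X = np.column_stack([np.ones(N), betas])   # constant + beta
lam_t = np.empty((T, 2))
for t in range(T):                          # one cross-sectional OLS per period
    lam_t[t] = np.linalg.lstsq(X, r[t], rcond=None)[0]

lam_hat = lam_t.mean(axis=0)                # Fama-MacBeth point estimates
se = lam_t.std(axis=0, ddof=1) / np.sqrt(T) # SEs from the estimate time-series
```

The key point is the last line: because each period produces a fresh estimate, the dispersion of those estimates across time gives a standard error that does not need the usual single-regression formula.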
Time-series#
The model test from the time-series regressions compares the (squared) Sharpe ratio of the pricing errors (the alphas) to the (squared) Sharpe ratio of the factors. In finite samples, this is an F-test with extra scaling. Asymptotically it is a chi-squared test with scaling of T.
The model test from the time-series regression is known as the GRS test. It assumes the residuals \(\epsilon\) are independent of the factors and homoskedastic over time. It also assumes no serial correlation in \(\epsilon\) across the time-series, but that assumption is not such a big deal.
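A minimal sketch of the GRS statistic, again on simulated data generated under the null (true alphas are zero), using the usual finite-sample F form:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T, N, K = 500, 10, 1                   # periods, test assets, factors (simulated)
f = rng.normal(0.5, 2.0, size=(T, K))  # factor excess returns
B = rng.normal(1.0, 0.3, size=(N, K))  # true betas; true alphas are zero
r = f @ B.T + rng.normal(0.0, 2.0, size=(T, N))

# time-series OLS of each asset on a constant and the factors
X = np.column_stack([np.ones(T), f])
coef, *_ = np.linalg.lstsq(X, r, rcond=None)
alpha = coef[0]                        # estimated intercepts, shape (N,)
resid = r - X @ coef
Sigma = resid.T @ resid / (T - K - 1)  # residual covariance
mu_f = f.mean(axis=0)
Omega = np.cov(f, rowvar=False).reshape(K, K)

# GRS: squared SR of the alphas relative to the squared SR of the factors
quad_a = alpha @ np.linalg.solve(Sigma, alpha)
quad_f = mu_f @ np.linalg.solve(Omega, mu_f)
grs = (T / N) * ((T - N - K) / (T - K - 1)) * quad_a / (1 + quad_f)
pval = stats.f.sf(grs, N, T - N - K)   # F(N, T-N-K) under the null
```

The `(T - N - K) / (T - K - 1)` factor is the “extra scaling” mentioned above; dropping it and multiplying `quad_a / (1 + quad_f)` by T gives the asymptotic chi-squared version.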
Notation Addendum#
It might seem like the intercept of the cross-sectional regression is the average of the intercepts of the time-series regressions.
However, the cross-sectional regression changes the factor premia in order to shrink these alpha errors.
Thus, the variance of the cross-sectional errors, \(\upsilon\), will be strictly less than the variance of the time-series alphas. Accordingly, the cross-sectional intercept, \(\eta\), may even have a different sign than the mean of the time-series intercepts. Again, this results from the fact that the cross-sectional regression coefficients may be substantially different from (and even of a different sign than) the time-series factor means.
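A tiny numerical example of this point, with hypothetical numbers chosen so the arithmetic is visible by hand: the time-series alphas average to a negative number, yet the cross-sectional intercept comes out positive, because the cross-sectional slope is pulled away from the factor mean.

```python
import numpy as np

# hypothetical inputs: three assets, one factor
betas = np.array([0.5, 1.0, 1.5])
alphas = 0.2 - 0.6 * betas            # time-series intercepts; mean = -0.4
mu_f = 1.0                            # time-series mean of the factor
Er = alphas + betas * mu_f            # average returns implied by the TS regressions

# cross-sectional OLS of average returns on betas (with a constant)
X = np.column_stack([np.ones_like(betas), betas])
(eta, lam), *_ = np.linalg.lstsq(X, Er, rcond=None)
ups = Er - X @ np.array([eta, lam])   # cross-sectional residuals

print(alphas.mean(), eta)             # -0.4 vs 0.2: opposite signs
print(mu_f, lam)                      # 1.0 vs 0.4: premium shrunk away from factor mean
```

Here the cross-sectional residual variance is (exactly) zero while the time-series alphas have positive variance, illustrating the shrinkage of the alpha errors described above.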