Monday, December 26, 2016

Specification Testing With Very Large Samples

I received the following email query a while back:
"It's my understanding that in the event that you have a large sample size (in my case, > 2million obs) many tests for functional form mis-specification will report statistically significant results purely on the basis that the sample size is large. In this situation, how can one reasonably test for misspecification?" 
Well, to begin with, that's absolutely correct - if the sample size is very, very large then almost any null hypothesis will be rejected (at conventional significance levels). For instance, see this earlier post of mine.
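To see this point in action, here's a small simulation (the numbers are entirely hypothetical). We fit a linear model when the true relationship contains a quadratic term so small as to be practically negligible, and then apply a RESET-style F-test (adding powers of the fitted values). With n = 2 million, the test rejects emphatically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

x = rng.uniform(1.0, 10.0, n)
# True DGP has a tiny quadratic term - a practically negligible mis-specification
y = 1.0 + 2.0 * x + 0.005 * x**2 + rng.normal(0.0, 1.0, n)

def fit_rss(design, y):
    """OLS fit; return fitted values and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    fitted = design @ beta
    return fitted, np.sum((y - fitted) ** 2)

# Restricted model: linear in x
X0 = np.column_stack([np.ones(n), x])
fitted0, rss0 = fit_rss(X0, y)

# RESET-style augmentation: add squares and cubes of the fitted values
X1 = np.column_stack([X0, fitted0**2, fitted0**3])
_, rss1 = fit_rss(X1, y)

q = 2                   # number of added regressors
k = X1.shape[1]         # parameters in the unrestricted model
F = ((rss0 - rss1) / q) / (rss1 / (n - k))
print(F)  # an enormous F-statistic: p-value is effectively zero
```

The quadratic term shifts the conditional mean by at most half a residual standard deviation, yet the test statistic is in the hundreds. The "significance" is driven by the sample size, not by a practically important departure from linearity.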

Shmueli (2012) also addresses this point from the p-value perspective.

But the question was, what can we do in this situation if we want to test for functional form mis-specification?

Shmueli offers some general suggestions that could be applied to this specific question:
  1. Present effect sizes.
  2. Report confidence intervals.
  3. Use (certain types of) charts.
This is followed with an empirical example relating to auction prices for camera sales on eBay, using a sample size of n = 341,136.
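The first two of those suggestions are easy to illustrate with a hypothetical simulation. Below, a slope coefficient is "highly significant" in a sample of 2 million observations, but the confidence interval makes clear that the effect size itself is negligible:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2_000_000
x = rng.normal(0.0, 1.0, n)
y = 0.005 * x + rng.normal(0.0, 1.0, n)  # a tiny (but non-zero) true effect

# OLS slope without an intercept (x and y are mean-zero by construction)
beta = np.sum(x * y) / np.sum(x * x)
resid = y - beta * x
se = np.sqrt(np.sum(resid**2) / (n - 1) / np.sum(x * x))

t = beta / se
ci = (beta - 1.96 * se, beta + 1.96 * se)
print(t)   # well beyond any conventional critical value
print(ci)  # ...but the entire interval sits close to zero
```

The t-statistic screams "significant", while the confidence interval tells the reader what actually matters: the effect, though precisely estimated, is tiny. Reporting the interval (or the effect size) conveys this; the p-value alone does not.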

To this, I'd add: consider alternative functional forms, and use ex post forecast performance and cross-validation to choose a preferred specification for your model.
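As a sketch of that idea (again with made-up data), here is a k-fold cross-validation comparison of two candidate functional forms, linear and logarithmic, scored by out-of-sample mean squared error rather than by a hypothesis test:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.uniform(1.0, 10.0, n)
y = 3.0 * np.log(x) + rng.normal(0.0, 0.5, n)  # true form is logarithmic

def cv_mse(design, y, k=5):
    """k-fold cross-validated MSE for an OLS fit of y on `design`."""
    idx = np.arange(len(y))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(design[train], y[train], rcond=None)
        errs.append(np.mean((y[fold] - design[fold] @ beta) ** 2))
    return np.mean(errs)

X_lin = np.column_stack([np.ones(n), x])          # linear in x
X_log = np.column_stack([np.ones(n), np.log(x)])  # logarithmic in x

print(cv_mse(X_lin, y), cv_mse(X_log, y))  # the log specification should win
```

The model with the lower cross-validated MSE is preferred on predictive grounds, and this criterion doesn't degenerate as n grows in the way that conventional significance tests do.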

You don't always have to use conventional hypothesis testing for this purpose.

Reference

Shmueli, G., 2012. Too big to fail: Large samples and the p-value problem. Mimeo., Institute of Service Science, National Tsing Hua University, Taiwan.


© 2016, David E. Giles

Irving Fisher & Distributed Lags

Some time back, Mike Belongia (U. Mississippi) emailed me as follows: 
"I enjoyed your post on Shirley Almon;  her name was very familiar to those of us of a certain age.
With regard to your planned follow-up post, I thought you might enjoy the attached piece by Irving Fisher who, in 1925, was attempting to associate variations in the price level with the volume of trade.  At the bottom of p. 183, he claims that "So far as I know this is the first attempt to distribute a statistical lag" and then goes on to explain his approach to the question.  Among other things, I'm still struck by the fact that Fisher's "computer" consisted of his intellect and a pencil and paper."
The 1925 paper by Fisher that Mike is referring to can be found here. Here are pages 183 and 184:
[Scanned images of pages 183 and 184 of Fisher (1925)]

Thanks for sharing this interesting bit of econometrics history, Mike. And I haven't forgotten that I promised to prepare a follow-up post on the Almon estimator!

© 2016, David E. Giles