Heckman correction
The Heckman correction is a statistical technique to correct bias from non-randomly selected samples or otherwise incidentally truncated dependent variables, a pervasive issue in quantitative social sciences when using observational data.[1] Conceptually, this is achieved by explicitly modelling the individual sampling probability of each observation (the so-called selection equation) together with the conditional expectation of the dependent variable (the so-called outcome equation). The resulting likelihood function is mathematically similar to the tobit model for censored dependent variables, a connection first drawn by James Heckman in 1974.[2] Heckman also developed a two-step control function approach to estimate this model,[3] which avoids the computational burden of having to estimate both equations jointly, albeit at the cost of inefficiency.[4] Heckman received the Nobel Memorial Prize in Economic Sciences in 2000 for his work in this field.[5]
Method
Statistical analyses based on non-randomly selected samples can lead to erroneous conclusions. The Heckman correction, a two-step statistical approach, offers a means of correcting for non-randomly selected samples.
Heckman discussed bias from using non-randomly selected samples to estimate behavioral relationships as a specification error, and suggested a two-stage estimation method to correct for it. The correction uses a control function idea and is straightforward to implement. It relies on a normality assumption and provides both a test for sample selection bias and a formula for the bias-corrected model.
Suppose that a researcher wants to estimate the determinants of wage offers, but has access to wage observations for only those who work. Since people who work are selected non-randomly from the population, estimating the determinants of wages from the subpopulation who work may introduce bias. The Heckman correction takes place in two stages.
In the first stage, the researcher formulates a model, based on economic theory, for the probability of working. The canonical specification for this relationship is a probit regression of the form

Prob(D = 1 | Z) = Φ(Zγ),

where D indicates employment (D = 1 if the respondent is employed and D = 0 otherwise), Z is a vector of explanatory variables, γ is a vector of unknown parameters, and Φ is the cumulative distribution function of the standard normal distribution. Estimation of the model yields results that can be used to predict this employment probability for each individual.
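A minimal sketch of this first stage, written here in Python with simulated data and the statsmodels Probit estimator; the data-generating process and the variable names (z1, z2, employed) are illustrative assumptions rather than part of the original exposition.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
z1, z2 = rng.normal(size=n), rng.normal(size=n)      # selection covariates
eps = rng.normal(size=n)                              # unobserved determinants of working
employed = (0.5 + 1.0 * z1 + 0.5 * z2 + eps > 0).astype(int)  # D = 1 if latent index > 0

# First stage: probit regression Prob(D = 1 | Z) = Phi(Z'gamma)
Z = sm.add_constant(np.column_stack([z1, z2]))
probit_fit = sm.Probit(employed, Z).fit(disp=0)
print(probit_fit.params)     # gamma-hat; probit_fit.predict(Z) gives fitted probabilities
```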
In the second stage, the researcher corrects for self-selection by incorporating a transformation of these predicted individual probabilities as an additional explanatory variable. The wage equation may be specified as

w* = Xβ + u,

where w* denotes an underlying wage offer, which is not observed if the respondent does not work. The conditional expectation of wages given that the person works is then

E[w | X, D = 1] = Xβ + E[u | X, D = 1].
Under the assumption that the error terms are jointly normal, we have

E[w | X, D = 1] = Xβ + ρσ_u λ(Zγ),

where ρ is the correlation between the unobserved determinants of the propensity to work and the unobserved determinants of wage offers u, σ_u is the standard deviation of u, and λ(Zγ) = φ(Zγ)/Φ(Zγ) is the inverse Mills ratio (the ratio of the standard normal density to its distribution function) evaluated at Zγ. This equation demonstrates Heckman's insight that sample selection can be viewed as a form of omitted-variables bias: conditional on both X and λ, it is as if the sample were randomly selected. The wage equation can be estimated by replacing γ with probit estimates from the first stage, constructing the λ term, and including it as an additional explanatory variable in linear regression estimation of the wage equation. Since σ_u > 0, the coefficient on λ can be zero only if ρ = 0, so testing the null hypothesis that the coefficient on λ is zero is equivalent to testing for sample selectivity.
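Continuing in the same spirit, a self-contained sketch of the full two-step procedure (the first stage is repeated so the block runs on its own); the data-generating process, the excluded variable z, and all parameter values are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                               # wage covariate (also in selection)
z = rng.normal(size=n)                               # excluded variable: affects selection only
eps = rng.normal(size=n)                             # selection-equation error
u = 0.5 * eps + rng.normal(scale=0.8, size=n)        # wage error, correlated with eps

d = (0.3 + 0.8 * x + 1.0 * z + eps > 0).astype(int)  # employment indicator D
wage = np.where(d == 1, 1.0 + 2.0 * x + u, np.nan)   # wage offer observed only for workers

# Step 1: probit of D on the selection covariates, then the inverse Mills ratio
Z = sm.add_constant(np.column_stack([x, z]))
index = Z @ sm.Probit(d, Z).fit(disp=0).params       # Z * gamma-hat
lam = norm.pdf(index) / norm.cdf(index)              # lambda(Z * gamma-hat)

# Step 2: OLS of observed wages on X and the constructed lambda term
work = d == 1
X2 = sm.add_constant(np.column_stack([x[work], lam[work]]))
ols_fit = sm.OLS(wage[work], X2).fit()
print(ols_fit.params)        # estimates of [intercept, beta, rho * sigma_u]
print(ols_fit.tvalues[-1])   # t-statistic on lambda, a rough test for selectivity
                             # (naive OLS standard errors; see "Statistical inference" below)
```

Dedicated routines such as those listed under "Implementations in statistics packages" below typically also handle the standard-error correction discussed in the next section.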
Heckman's achievements have generated a large number of empirical applications in economics as well as in other social sciences. The original method has subsequently been generalized, by Heckman and by others.[6]
Statistical inference
The Heckman correction is a two-step M-estimator where the covariance matrix generated by OLS estimation of the second stage is inconsistent.[7] Correct standard errors and other statistics can be generated from an asymptotic approximation or by resampling, such as through a bootstrap.[8]
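One common remedy is a nonparametric (pairs) bootstrap that re-runs both stages on each resample. The sketch below is generic: the dataset and the function implementing the two-step procedure (for example, the one sketched in the Method section) are placeholders supplied by the user.

```python
import numpy as np

def bootstrap_se(data, two_step_estimator, n_boot=500, seed=0):
    """Pairs bootstrap for a two-step estimator.

    data: array of observations (one row per individual);
    two_step_estimator: a function mapping a resampled dataset to the
    second-stage coefficient vector. Both are user-supplied placeholders.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample rows with replacement
        draws.append(two_step_estimator(data[idx]))  # re-run both estimation stages
    return np.asarray(draws).std(axis=0, ddof=1)     # bootstrap standard errors
```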
Disadvantages
- The two-step estimator discussed above is a limited information maximum likelihood (LIML) estimator. In asymptotic theory and in finite samples, as demonstrated by Monte Carlo simulations, the full information maximum likelihood (FIML) estimator exhibits better statistical properties. However, the FIML estimator is more computationally difficult to implement.[9]
- The canonical model assumes the errors are jointly normal. If that assumption fails, the estimator is generally inconsistent and can provide misleading inference in small samples.[10] Semiparametric and other robust alternatives can be used in such cases.[11]
- The model obtains formal identification from the normality assumption when the same covariates appear in the selection equation and the equation of interest, but identification will be tenuous unless there are many observations in the tails, where there is substantial nonlinearity in the inverse Mills ratio. Generally, an exclusion restriction is required to generate credible estimates: there must be at least one variable which appears with a non-zero coefficient in the selection equation but does not appear in the equation of interest, essentially an instrument. If no such variable is available, it may be difficult to correct for sample selectivity.[9] The reason is two-fold: without an instrument, identification rests entirely on the assumed functional form, which is generally considered a weak basis for identification.[12] Furthermore, even if that assumption holds, the inverse Mills ratio can be nearly linear over the range of the data, causing a multicollinearity problem in the second stage (illustrated in the simulation sketch after this list).
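A small simulation sketch of the last point, reusing the probit and inverse-Mills construction from the Method section: when the selection and outcome equations share the same single covariate and no variable is excluded, the constructed λ term is nearly linear in that covariate over the observed range, so the two second-stage regressors are close to collinear. The setup and parameter values are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                       # the only covariate, in both equations
d = (0.5 + 1.0 * x + rng.normal(size=n) > 0).astype(int)

Z = sm.add_constant(x)
index = Z @ sm.Probit(d, Z).fit(disp=0).params
lam = norm.pdf(index) / norm.cdf(index)      # inverse Mills ratio term

work = d == 1
# Correlation between the two second-stage regressors among workers;
# a magnitude close to 1 signals the multicollinearity problem.
print(np.corrcoef(x[work], lam[work])[0, 1])
```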
Implementations in statistics packages
- R: Heckman-type procedures are available as part of the sampleSelection package.[13][14]
- Stata: the command heckman provides the Heckman selection model.[15][16]
See also
- Propensity score matching – Statistical matching technique
- Roy model – Model for self-selection in economics
References
- ^ Winship, Christopher; Mare, Robert D. (1992). "Models for Sample Selection Bias". Annual Review of Sociology. 18: 327–350. doi:10.1146/annurev.so.18.080192.001551.
- ^ Heckman, James (1974). "Shadow Prices, Market Wages, and Labor Supply". Econometrica. 42 (4): 679–694. doi:10.2307/1913937. JSTOR 1913937.
- ^ Heckman, James (1976). "The Common Structure of Statistical Models of Truncation, Sample Selection and Limited Dependent Variables and a Simple Estimator for Such Models". Annals of Economic and Social Measurement. 5 (4): 475–492.
- ^ Nawata, Kazumitsu (1994). "Estimation of Sample Selection Bias Models by the Maximum Likelihood Estimator and Heckman's Two-Step Estimator". Economics Letters. 45 (1): 33–40. doi:10.1016/0165-1765(94)90053-1.
- ^ Uchitelle, Louis (October 12, 2000). "2 Americans Win the Nobel For Economics". New York Times.
- ^ Lee, Lung-Fei (2001). "Self-selection". In Baltagi, B. (ed.). A Companion to Theoretical Econometrics. Oxford: Blackwell. pp. 383–409. doi:10.1002/9780470996249.ch19. ISBN 9780470996249.
- ^ Amemiya, Takeshi (1985). Advanced Econometrics. Cambridge: Harvard University Press. pp. 368–372. ISBN 0-674-00560-0.
- ^ Cameron, A. Colin; Trivedi, Pravin K. (2005). "Sequential Two-Step m-Estimation". Microeconometrics: Methods and Applications. New York: Cambridge University Press. pp. 200–202. ISBN 0-521-84805-9.
- ^ a b Puhani, P. (2000). "The Heckman Correction for sample selection and its critique". Journal of Economic Surveys. 14 (1): 53–68. doi:10.1111/1467-6419.00104.
- ^ Goldberger, A. (1983). "Abnormal Selection Bias". In Karlin, Samuel; Amemiya, Takeshi; Goodman, Leo (eds.). Studies in Econometrics, Time Series, and Multivariate Statistics. New York: Academic Press. pp. 67–84. ISBN 0-12-398750-4.
- ^ Newey, Whitney; Powell, J.; Walker, James R. (1990). "Semiparametric Estimation of Selection Models: Some Empirical Results". American Economic Review. 80 (2): 324–28. JSTOR 2006593.
- ^ Lewbel, Arthur (2019-12-01). "The Identification Zoo: Meanings of Identification in Econometrics". Journal of Economic Literature. 57 (4): 835–903. doi:10.1257/jel.20181361. ISSN 0022-0515.
- ^ Toomet, O.; Henningsen, A. (2008). "Sample Selection Models in R: Package sampleSelection". Journal of Statistical Software. 27 (7): 1–23. doi:10.18637/jss.v027.i07.
- ^ "sampleSelection: Sample Selection Models". R Project. 3 May 2019.
- ^ "heckman — Heckman selection model" (PDF). Stata Manual.
- ^ Cameron, A. Colin; Trivedi, Pravin K. (2010). Microeconometrics Using Stata (Revised ed.). College Station: Stata Press. pp. 556–562. ISBN 978-1-59718-073-3.
Further reading
- Achen, Christopher H. (1986). "Estimating Treatment Effects in Quasi-Experiments: The Case of Censored Data". The Statistical Analysis of Quasi-Experiments. Berkeley: University of California Press. pp. 97–137. ISBN 0-520-04723-0.
- Breen, Richard (1996). Regression Models: Censored, Sample Selected, or Truncated Data. Thousand Oaks: Sage. pp. 33–48. ISBN 0-8039-5710-6.
- Fu, Vincent Kang; Winship, Christopher; Mare, Robert D. (2004). "Sample Selection Bias Models". In Hardy, Melissa; Bryman, Alan (eds.). Handbook of Data Analysis. London: Sage. pp. 409–430. doi:10.4135/9781848608184.n18. ISBN 0-7619-6652-8.
- Greene, William H. (2012). "Incidental Truncation and Sample Selection". Econometric Analysis (Seventh ed.). Boston: Pearson. pp. 912–27. ISBN 978-0-273-75356-8.
- Vella, Francis (1998). "Estimating Models with Sample Selection Bias: A Survey". Journal of Human Resources. 33 (1): 127–169. doi:10.2307/146317. JSTOR 146317.