
Yuichi Kitamura Publications

Abstract

This paper develops new tools for the analysis of Random Utility Models (RUM). The leading application is stochastic revealed preference theory, that is, the modeling of aggregate choice behavior in a population characterized by individual rationality and unobserved heterogeneity. We test the null hypothesis that a repeated cross-section of demand data was generated by such a population, without restricting unobserved heterogeneity in any form whatsoever. Equivalently, we empirically test McFadden and Richter’s (1991) Axiom of Revealed Stochastic Preference (ARSP, to be defined later), using only nonsatiation and the Strong Axiom of Revealed Preference (SARP) as restrictions on individual level behavior. Doing this is computationally challenging. We provide various algorithms that can be implemented with reasonable computational resources. Also, new tools for statistical inference for inequality restrictions are introduced in order to deal with the high-dimensionality and non-regularity of the problem at hand.
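As a concrete illustration of the individual-level restriction used above, the following sketch checks the Strong Axiom of Revealed Preference (SARP) for a single consumer observed on a finite set of budgets. It is only a schematic check of the building block, not the paper's population-level test of ARSP; the function name and the price/quantity array layout are illustrative assumptions.

import numpy as np

def satisfies_sarp(prices, quantities):
    # prices, quantities: (T, K) arrays; row t is the price vector and the
    # bundle chosen on budget t.  Bundle i is directly revealed preferred to
    # bundle j (i R j) if p_i . x_j <= p_i . x_i and x_i != x_j.
    # SARP holds if and only if the directed graph of R has no cycle.
    T = len(prices)
    R = [[False] * T for _ in range(T)]
    for i in range(T):
        for j in range(T):
            if i != j and not np.allclose(quantities[i], quantities[j]):
                R[i][j] = prices[i] @ quantities[j] <= prices[i] @ quantities[i]
    color = [0] * T                      # 0 unvisited, 1 on stack, 2 finished
    def cyclic(v):
        color[v] = 1
        for w in range(T):
            if R[v][w] and (color[w] == 1 or (color[w] == 0 and cyclic(w))):
                return True
        color[v] = 2
        return False
    return not any(color[v] == 0 and cyclic(v) for v in range(T))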

Abstract

In this paper we make two contributions. First, we show by example that empirical likelihood and other commonly used tests for parametric moment restrictions, including the GMM-based J-test of Hansen (1982), are unable to control the rate at which the probability of a Type I error tends to zero. From this it follows that, for the optimality claim for empirical likelihood in Kitamura (2001) to hold, additional assumptions and qualifications need to be introduced. The example also reveals that empirical and parametric likelihood may have non-negligible differences for the types of properties we consider, even in models in which they are first-order asymptotically equivalent. Second, under stronger assumptions than those in Kitamura (2001), we establish the following optimality result: (i) empirical likelihood controls the rate at which the probability of a Type I error tends to zero, and (ii) among all procedures for which the probability of a Type I error tends to zero at least as fast, empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for “most” alternatives. This result further implies that empirical likelihood maximizes the rate at which the probability of a Type II error tends to zero for all alternatives among a class of tests that satisfy a weaker criterion for their Type I error probabilities.
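For concreteness, here is a minimal sketch of the empirical likelihood ratio for a moment restriction E[g(X, θ)] = 0 at a fixed θ, computed through the usual convex dual over the Lagrange multiplier. The moment function, optimizer choice, and tolerances below are illustrative assumptions, not the paper's implementation.

import numpy as np
from scipy.optimize import minimize

def el_ratio(g):
    # g: (n, m) array of moment contributions g(x_i, theta) at a fixed theta.
    # Implied probabilities are p_i = 1 / (n * (1 + lam @ g_i)), where lam
    # maximizes the concave dual sum_i log(1 + lam @ g_i); the returned
    # statistic is -2 times the log empirical likelihood ratio.
    n, m = g.shape
    def neg_dual(lam):
        w = 1.0 + g @ lam
        return np.inf if np.any(w <= 1e-10) else -np.sum(np.log(w))
    lam = minimize(neg_dual, np.zeros(m), method="Nelder-Mead").x
    return 2.0 * np.sum(np.log(1.0 + g @ lam))

# illustration: testing H0: E[X] = 0 when the true mean is 0.2
rng = np.random.default_rng(0)
x = rng.normal(loc=0.2, size=(200, 1))
print(el_ratio(x))   # compare with the chi-square(1) critical value 3.84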

Abstract

This paper is concerned with robust estimation under moment restrictions. A moment restriction model is semiparametric and distribution-free, and therefore imposes only mild assumptions. Yet it is reasonable to expect that the probability law of the observations may deviate somewhat from the ideal distribution being modeled, due to various factors such as measurement error. It is then sensible to seek an estimation procedure that is robust against slight perturbations in the probability measure that generates the observations. This paper considers local deviations within shrinking topological neighborhoods to develop its large sample theory, so that both bias and variance matter asymptotically. The main result shows that there exists a computationally convenient estimator that achieves optimal minimax robustness properties. It is semiparametrically efficient when the model assumption holds, and at the same time it enjoys desirable robustness properties when it does not.
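To fix ideas, a moment restriction model specifies only that certain population moments vanish at the true parameter, leaving the distribution of the data otherwise free. A standard illustrative example (not one taken from the paper) is the linear instrumental variables condition

\[
E\left[\, z \,(y - x'\theta_0) \,\right] = 0,
\]

where z is a vector of instruments; nothing beyond this orthogonality condition is imposed on the joint law of (y, x, z), which is what makes the model semiparametric and distribution-free.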

Abstract

Recent developments in empirical likelihood (EL) methods are reviewed. First, to put the method in perspective, two interpretations of empirical likelihood are presented, one as a nonparametric maximum likelihood estimation method (NPMLE) and the other as a generalized minimum contrast estimator (GMC). The latter interpretation provides a clear connection between EL, GMM, GEL and other related estimators. Second, EL is shown to have various advantages over other methods. The theory of large deviations demonstrates that EL emerges naturally in achieving asymptotic optimality both for estimation and testing. Interestingly, higher order asymptotic analysis also suggests that EL is generally a preferred method. Third, extensions of EL are discussed in various settings, including estimation of conditional moment restriction models, nonparametric specification testing and time series models. Finally, practical issues in applying EL to real data, such as computational algorithms for EL, are discussed. Numerical examples to illustrate the efficacy of the method are presented.

Keywords: Convex analysis, Empirical distribution, GNP-optimality, Large deviation principle, Moment restriction models, Nonparametric test, NPMLE, Semiparametric efficiency, Weak dependence

JEL Classification: C14
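To make the GMC interpretation above concrete, the family can be written (in a common notation; this is a sketch rather than a quotation from the survey) as

\[
\hat{\theta} = \arg\min_{\theta}\ \min_{p_1,\dots,p_n}\ \sum_{i=1}^{n} \phi(n p_i)
\quad \text{subject to} \quad
\sum_{i=1}^{n} p_i\, g(x_i,\theta) = 0, \quad \sum_{i=1}^{n} p_i = 1, \quad p_i \ge 0,
\]

where \(\phi\) is a convex discrepancy function: \(\phi(v) = -\log v\) yields empirical likelihood, \(\phi(v) = v \log v\) yields exponential tilting, and a quadratic \(\phi\) yields the Euclidean likelihood (continuous-updating) member of the family.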

Abstract

This paper develops a general theory of instrumental variables (IV) estimation that allows for both I(1) and I(0) regressors and instruments. The estimation techniques involve an extension of the fully modified (FM) regression procedure introduced in earlier work by Phillips and Hansen (1990). FM versions of the generalized instrumental variable estimation (GIVE) method and the generalized method of moments (GMM) estimator are developed. In models with both stationary and nonstationary regressors, the FM-GIVE and FM-GMM techniques provide efficiency gains over FM-IV in the estimation of the stationary components of the model. The paper exploits a result of Phillips (1991a) showing that FM techniques can be applied in models with cointegrated regressors, and even in stationary regression models, without losing the method's good asymptotic properties. The present paper shows how to take joint advantage of the good asymptotic properties of FM estimators with respect to the nonstationary elements of a model and the good asymptotic properties of the GIVE and GMM estimators with respect to the stationary components. The theory applies even when there is no prior knowledge of the number of unit roots in the system, or of the dimension or location of the cointegration space. An FM extension of the Sargan (1958) test for the validity of the instruments is proposed.
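As background for the FM extensions described above, the original Phillips–Hansen correction for a cointegrating regression \(y_t = \beta' x_t + u_{0t}\) with \(\Delta x_t = u_{xt}\) can be sketched, in one common notation, as

\[
\hat{\beta}_{FM} = \Big(\sum_{t} x_t x_t'\Big)^{-1}\Big(\sum_{t} x_t y_t^{+} - T\,\hat{\Lambda}_{x0}^{+}\Big),
\qquad
y_t^{+} = y_t - \hat{\Omega}_{0x}\hat{\Omega}_{xx}^{-1}\,\Delta x_t,
\qquad
\hat{\Lambda}_{x0}^{+} = \hat{\Lambda}_{x0} - \hat{\Lambda}_{xx}\hat{\Omega}_{xx}^{-1}\hat{\Omega}_{x0},
\]

where \(\hat{\Omega}\) and \(\hat{\Lambda}\) are kernel estimates of the two-sided and one-sided long-run covariance matrices of \((u_{0t}, u_{xt}')'\), partitioned conformably; the first correction removes long-run endogeneity and the second removes serial-correlation bias. This is only the building block; the paper's FM-GIVE and FM-GMM estimators extend it to systems with instruments and mixtures of I(1) and I(0) components.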