
Yusuke Narita Publications

Discussion Paper
Abstract

We obtain a necessary and sufficient condition under which random-coefficient discrete choice models, such as mixed-logit models, are rich enough to approximate any nonparametric random utility model arbitrarily well across choice sets. The condition turns out to be affine independence of the set of characteristic vectors. When the condition fails, so that some random utility models cannot be closely approximated, we identify preferences and substitution patterns that are challenging to approximate accurately. We also propose algorithms to quantify the magnitude of the approximation errors.
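Affine independence is straightforward to check numerically: vectors v1, ..., vJ are affinely independent exactly when the differences v2 - v1, ..., vJ - v1 are linearly independent. A minimal sketch of such a check, using hypothetical characteristic vectors (everything below is illustrative, not taken from the paper):

```python
import numpy as np

def affinely_independent(vectors):
    """Check affine independence: v2 - v1, ..., vJ - v1 must be linearly independent."""
    V = np.asarray(vectors, dtype=float)
    if len(V) <= 1:
        return True
    diffs = V[1:] - V[0]  # (J-1) x d matrix of differences from the first vector
    return np.linalg.matrix_rank(diffs) == len(V) - 1

# Hypothetical characteristic vectors for three alternatives in R^2
print(affinely_independent([[0, 0], [1, 0], [0, 1]]))  # True: vertices of a triangle
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))  # False: collinear points
```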

Discussion Paper
Abstract

Algorithms make a growing portion of policy and business decisions. We develop a treatment-effect estimator that uses algorithmic decisions as instruments, for a class of stochastic and deterministic algorithms. Our estimator is consistent and asymptotically normal for well-defined causal effects. A special case of our setup is the multidimensional regression discontinuity design with complex boundaries. We apply our estimator to evaluate the Coronavirus Aid, Relief, and Economic Security Act, which allocated many billions of dollars’ worth of relief funding to hospitals via an algorithmic rule. The funding is shown to have little effect on COVID-19-related hospital activities, whereas naive estimates exhibit selection bias.

Discussion Paper
Abstract

What happens if selective colleges change their admission policies? We study this question by analyzing the world’s first implementation of nationally centralized meritocratic admissions in the early twentieth century. We find a persistent meritocracy-equity tradeoff. Compared to the decentralized system, the centralized system admitted more high-achievers and produced more occupational elites (such as top income earners) decades later in the labor market. This gain came at a distributional cost, however. Meritocratic centralization also increased the number of urban-born elites relative to rural-born ones, undermining equal access to higher education and career advancement.

Biometrics
Abstract

Dynamic treatment regimes (DTRs) are sequences of decision rules that recommend treatments based on patients’ time-varying clinical conditions. The sequential, multiple assignment, randomized trial (SMART) is an experimental design that can provide high-quality evidence for constructing optimal DTRs. In a conventional SMART, participants are randomized to available treatments at multiple stages with balanced randomization probabilities. Despite its relative simplicity of implementation and desirable performance in comparing embedded DTRs, the conventional SMART faces inevitable ethical issues: many participants may be assigned to an empirically inferior treatment or to a treatment they dislike, which can slow recruitment and raise attrition rates, ultimately harming the internal and external validity of the trial results. In this context, we propose a SMART under the Experiment-as-Market framework (SMART-EXAM), a novel SMART design that holds the potential to improve participants’ welfare by incorporating their preferences and predicted treatment effects into the randomization procedure. We describe the steps of conducting a SMART-EXAM and evaluate its performance relative to the conventional SMART. The results indicate that SMART-EXAM can improve the welfare of the participants enrolled in the trial while also retaining a desirable ability to construct an optimal DTR when the experimental parameters are suitably specified. We finally illustrate the practical potential of the SMART-EXAM design using data from a SMART for children with attention-deficit/hyperactivity disorder.

Discussion Paper
Abstract

We obtain a necessary and sufficient condition under which random-coefficient discrete choice models, such as mixed logit models, are rich enough to approximate any nonparametric random utility model across choice sets. The condition turns out to be very simple and tractable. When the condition is not satisfied, so that there exists a random utility model that cannot be approximated by any random-coefficient discrete choice model, we provide algorithms to measure the approximation errors. Applying our theoretical results and algorithms to real data, we find that the approximation errors can be large in practice.

Management Science
Abstract

Many centralized school admissions systems use lotteries to ration limited seats at oversubscribed schools. The resulting random assignment is used by empirical researchers to identify the effects of schools on outcomes like test scores. I first find that the two most popular empirical research designs may fail to extract a random assignment of applicants to schools. When are the research designs able to overcome this problem? I show the following main results for a class of data-generating mechanisms containing those used in practice: the first-choice research design extracts a random assignment under a mechanism if the mechanism is strategy-proof for schools. In contrast, the other research design, based on a qualification instrument, does not necessarily extract a random assignment under any mechanism. The former research design is therefore more compelling than the latter. Many applications of the two research designs need some implicit assumption, such as approximately random assignment in large samples, to justify their empirical strategy.

Discussion Paper
Abstract

Democracy is widely believed to have contributed to economic growth and public health in the 20th and earlier centuries. We find that this conventional wisdom is reversed in this century: democracy has had persistent negative impacts on GDP growth during 2001-2020. This finding emerges from five different instrumental variable strategies. Our analysis suggests that democracies cause slower growth through less investment and trade. For 2020, democracy is also found to cause more deaths from COVID-19.

Discussion Paper
Abstract

Countries with more democratic political regimes experienced greater GDP loss and more deaths from COVID-19 in 2020. Using five different instrumental variable strategies, we find that democracy is a major cause of the wealth and health losses. This impact is global and is not driven by China and the US alone. A key channel for democracy’s negative impact is weaker and narrower containment policies at the beginning of the outbreak, not the speed of introducing policies.

Discussion Paper
Abstract

Democracy is widely believed to contribute to economic growth and public health. However, we find that this conventional wisdom no longer holds and has even reversed: democracy has had persistent negative impacts on GDP growth since the beginning of this century. This finding emerges from five different instrumental variable strategies. Our analysis suggests that democracies cause slower growth through less investment, less trade, and slower value-added growth in manufacturing and services. For 2020, democracy is also found to cause more deaths from COVID-19.

Discussion Paper
Abstract

Algorithms produce a growing portion of decisions and recommendations in both policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments) since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic algorithms. Our estimator is shown to be consistent and asymptotically normal for well-defined causal effects. A key special case of our estimator is a high-dimensional regression discontinuity design. The proofs use tools from differential geometry and geometric measure theory, which may be of independent interest.


The practical performance of our method is first demonstrated in a high-dimensional simulation resembling decision-making by machine learning algorithms. Our estimator has smaller mean squared errors than alternative estimators. We finally apply our estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, under which more than $10 billion worth of relief funding was allocated to hospitals via an algorithmic rule. The estimates suggest that the relief funding has little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias.
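The IV logic can be illustrated in a simulated toy version of this setting: an "algorithm" assigns eligibility z stochastically from an observable input x, an unobserved confounder drives both take-up and outcomes, and two-stage least squares with z as the instrument (controlling for the algorithm's input) recovers the true effect while naive OLS overstates it. All numbers and variable names below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated setting: an algorithm assigns eligibility z from an observable
# input x via a stochastic rule (a logistic probability in x).
x = rng.normal(size=n)
z = (rng.uniform(size=n) < 1 / (1 + np.exp(-2 * x))).astype(float)
u = rng.normal(size=n)                           # unobserved confounder
d = (0.3 * x + 0.8 * z + u > 0).astype(float)    # treatment take-up, confounded by u
y = 1.5 * d + 0.5 * x + u + rng.normal(size=n)   # outcome; true effect of d is 1.5

def two_sls(y, d, z, controls):
    """2SLS: instrument d with z, controlling for the algorithm's input."""
    X = np.column_stack([np.ones_like(d), controls])
    Z = np.column_stack([z, X])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]   # first stage
    W = np.column_stack([d_hat, X])
    return np.linalg.lstsq(W, y, rcond=None)[0][0]     # coefficient on d_hat

naive = np.linalg.lstsq(np.column_stack([d, np.ones_like(d), x]), y, rcond=None)[0][0]
iv = two_sls(y, d, z, x[:, None])
print(f"naive OLS: {naive:.2f}, 2SLS: {iv:.2f}")  # naive is biased upward
```

With a constant treatment effect, as here, the 2SLS estimate lands near the true value of 1.5, while the naive OLS coefficient is inflated because the confounder raises both take-up and the outcome.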

Proceedings of the National Academy of Sciences
Abstract

Randomized controlled trials (RCTs) enroll hundreds of millions of subjects and involve many human lives. To improve subjects’ welfare, I propose a design of RCTs that I call Experiment-as-Market (EXAM). EXAM produces a welfare-maximizing allocation of treatment-assignment probabilities, is almost incentive-compatible for preference elicitation, and unbiasedly estimates any causal effect estimable with standard RCTs. I quantify these properties by applying EXAM to a water-cleaning experiment in Kenya. In this empirical setting, compared to standard RCTs, EXAM improves subjects’ predicted well-being while reaching similar treatment-effect estimates with similar precision.
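To convey the flavor of a welfare-aware randomization, here is a stylized toy, not EXAM's actual algorithm: each subject's demand for a treatment rises with willingness to pay and predicted effect and falls with a market price, and prices adjust until expected demand matches capacity, while every subject retains a non-degenerate assignment probability. All inputs are simulated and all names are hypothetical:

```python
import numpy as np

def pseudo_market_probabilities(wtp, effects, capacity, eta=1.0, iters=2000):
    """Stylized pseudo-market sketch (illustrative only): logit-smoothed demand
    for each treatment, with prices adjusted until capacity clears in expectation."""
    n, T = wtp.shape
    prices = np.zeros(T)
    for _ in range(iters):
        utility = wtp + effects - prices[None, :]
        # Each row is a probability distribution over treatments
        probs = np.exp(utility - utility.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        excess = probs.sum(axis=0) / n - np.asarray(capacity) / n
        prices += eta * excess  # raise prices of over-demanded treatments
    return probs

rng = np.random.default_rng(1)
n = 100
wtp = rng.normal(size=(n, 2))      # hypothetical willingness to pay
effects = rng.normal(size=(n, 2))  # hypothetical predicted treatment effects
probs = pseudo_market_probabilities(wtp, effects, capacity=[50, 50])
print(probs.sum(axis=0))           # expected counts per treatment, close to capacity
```

Because assignment stays probabilistic for everyone, treatment effects remain estimable as in a standard RCT, which is the property the abstract highlights.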