Algorithms produce a growing portion of decisions and recommendations in both policy and business. Such algorithmic decisions are natural experiments (conditionally quasi-randomly assigned instruments), since the algorithms make decisions based only on observable input variables. We use this observation to develop a treatment-effect estimator for a class of stochastic and deterministic decision-making algorithms. We show that our estimator is consistent and asymptotically normal for well-defined causal effects. A key special case of our estimator is a multidimensional regression discontinuity design. We apply our estimator to evaluate the effect of the Coronavirus Aid, Relief, and Economic Security (CARES) Act, under which hundreds of billions of dollars' worth of relief funding was allocated to hospitals via an algorithmic rule. Our estimates suggest that the relief funding had little effect on COVID-19-related hospital activity levels. Naive OLS and IV estimates exhibit substantial selection bias. We also apply the method to real-world datasets created by machine-learning algorithms.
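To illustrate the idea behind the special case mentioned above, the following is a minimal simulated sketch (not the paper's actual estimator or data): a deterministic allocation rule funds units whose observable score crosses a cutoff, which creates a sharp regression discontinuity, and a local linear fit on each side of the cutoff recovers the treatment effect. All variable names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a deterministic algorithmic rule funds units
# whose observable score exceeds a cutoff (a sharp RDD).
n = 5000
score = rng.uniform(-1, 1, n)             # observable input to the rule
cutoff = 0.0
funded = (score >= cutoff).astype(float)  # deterministic decision
true_effect = 2.0
outcome = 1.0 + 0.5 * score + true_effect * funded + rng.normal(0, 1, n)

def sharp_rdd(score, outcome, cutoff, bandwidth):
    """Local linear sharp-RDD estimate: fit a separate line on each
    side of the cutoff within the bandwidth, then take the gap between
    the two predicted outcomes at the cutoff."""
    est = {}
    for side, mask in [
        ("below", (score < cutoff) & (score > cutoff - bandwidth)),
        ("above", (score >= cutoff) & (score < cutoff + bandwidth)),
    ]:
        X = np.column_stack([np.ones(mask.sum()), score[mask] - cutoff])
        beta, *_ = np.linalg.lstsq(X, outcome[mask], rcond=None)
        est[side] = beta[0]  # intercept = predicted outcome at the cutoff
    return est["above"] - est["below"]

effect = sharp_rdd(score, outcome, cutoff, bandwidth=0.3)
print(f"estimated effect at the cutoff: {effect:.2f}")
```

Because the decision depends only on the observable score, comparing units just above and just below the cutoff approximates a randomized comparison, which is the intuition the abstract exploits.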
Requisite Skills and Qualifications:
I am looking for an RA to help with either the methodological or the empirical side of this project.
An ideal candidate has completed coursework in, and has strong interests in, econometrics/statistics (especially causal inference) and machine learning (especially bandit and reinforcement learning). Hands-on experience with machine-learning algorithms is a big plus. Please attach a transcript with your application. A writing sample is a plus, though not required.