Drew Fudenberg Publications

Theoretical Economics

We show that Bayesian posteriors concentrate on the outcome distributions that approximately minimize the Kullback–Leibler divergence from the empirical distribution, uniformly over sample paths, even when the prior does not have full support. This generalizes Diaconis and Freedman's (1990) uniform convergence result to, e.g., priors that have finite support, are constrained by independence assumptions, or have a parametric form that cannot match some probability distributions. The concentration result lets us provide a rate of convergence for Berk's (1966) result on the limiting behavior of posterior beliefs when the prior is misspecified. We provide a bound on approximation errors in “anticipated-utility” models, and extend our analysis to outcomes that are perceived to follow a Markov process.
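The concentration result can be illustrated numerically. Below is a minimal sketch (not the paper's construction): a prior with finite support on two candidate outcome distributions, neither of which matches the true data-generating process, so the prior is misspecified. After Bayesian updating on an i.i.d. sample, the posterior concentrates on the candidate with the smaller Kullback–Leibler divergence from the empirical distribution. All distributions and sample sizes here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# True outcome distribution on {0, 1, 2}; the prior's support excludes it,
# so the model is misspecified in Berk's (1966) sense.
p_true = np.array([0.5, 0.3, 0.2])
models = np.array([[0.6, 0.2, 0.2],    # candidate A
                   [0.2, 0.4, 0.4]])   # candidate B
prior = np.array([0.5, 0.5])

data = rng.choice(3, size=5000, p=p_true)

# Posterior over the two candidates (computed in log space for stability).
log_post = np.log(prior) + np.array([np.sum(np.log(m[data])) for m in models])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Empirical distribution and KL divergence from it to each candidate.
emp = np.bincount(data, minlength=3) / len(data)
kl = np.array([np.sum(emp * np.log(emp / m)) for m in models])

print("posterior over candidates:", post)
print("KL(empirical || candidate):", kl)
```

The posterior mass lands on the candidate minimizing the KL divergence from the empirical distribution, consistent with the uniform concentration result described above.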

Proceedings of the National Academy of Sciences

The drift-diffusion model (DDM) is a model of sequential sampling with diffusion signals, where the decision maker accumulates evidence until the process hits either an upper or lower stopping boundary and then stops and chooses the alternative that corresponds to that boundary. In perceptual tasks, the drift of the process is related to which choice is objectively correct, whereas in consumption tasks, the drift is related to the relative appeal of the alternatives. The simplest version of the DDM assumes that the stopping boundaries are constant over time. More recently, a number of papers have used nonconstant boundaries to better fit the data. This paper provides a statistical test for DDMs with general, nonconstant boundaries. As a by-product, we show that the drift and the boundary are uniquely identified. We use our condition to nonparametrically estimate the drift and the boundary and construct a test statistic based on finite samples.
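The basic mechanics of the DDM can be sketched with a short simulation. This is a minimal illustration with constant boundaries (the paper's contribution concerns general, nonconstant boundaries); the drift, diffusion coefficient, boundary level, and step size below are all illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: drift mu, diffusion sigma, symmetric constant
# boundaries at +a and -a, Euler discretization with step dt.
mu, sigma, a = 0.5, 1.0, 1.0
dt, n_paths, max_steps = 0.01, 20_000, 10_000

x = np.zeros(n_paths)                 # accumulated evidence per path
alive = np.ones(n_paths, dtype=bool)  # paths that have not yet stopped
choices = np.zeros(n_paths)           # +1 = upper boundary, -1 = lower

for _ in range(max_steps):
    # Evidence accumulates as drift plus Gaussian diffusion noise.
    x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(alive.sum())
    up = alive & (x >= a)
    down = alive & (x <= -a)
    choices[up], choices[down] = 1, -1   # stop and choose at the hit boundary
    alive &= ~(up | down)
    if not alive.any():
        break

p_upper = np.mean(choices == 1)
# With constant boundaries the hitting probability has a closed form,
# P(upper) = 1 / (1 + exp(-2*mu*a/sigma**2)), about 0.73 here.
print("simulated P(upper boundary):", p_upper)
```

In a perceptual task the sign of the drift would encode which alternative is objectively correct; in a consumption task it would encode the relative appeal of the two alternatives.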