Yuhta Ishii Publications

Journal of Political Economy
Abstract

Which information structures are more effective at eliminating first- and higher-order uncertainty and hence at facilitating efficient play in coordination games? We consider a learning setting where players observe many private signals about the state. First, we characterize multiagent learning efficiency, that is, the rate at which players approximate common knowledge. We find that this coincides with the rate at which first-order uncertainty disappears, as higher-order uncertainty vanishes faster than first-order uncertainty. Second, we show that with enough signal draws, information structures with higher learning efficiency induce higher equilibrium welfare. We highlight information design implications for games in data-rich environments.
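As a rough, hypothetical illustration of the kind of speed comparison involved (a single-player sketch, not the paper's multi-player setting with higher-order uncertainty; the Bernoulli signal accuracies below are invented for the example), one can simulate how many private signal draws a Bayesian needs before first-order uncertainty about a binary state essentially disappears under two different information structures:

```python
import numpy as np

rng = np.random.default_rng(0)

def draws_to_learn(accuracy, eps=0.01, n_sims=2000, max_n=2000):
    """Average number of i.i.d. binary signals a Bayesian needs before the
    posterior on the true state exceeds 1 - eps (true state fixed to 1).
    Each signal matches the state with probability `accuracy` (hypothetical)."""
    counts = []
    llr = np.log(accuracy / (1 - accuracy))
    for _ in range(n_sims):
        log_odds = 0.0  # uniform prior over the two states
        for n in range(1, max_n + 1):
            match = rng.random() < accuracy          # does the signal match the state?
            log_odds += llr if match else -llr
            if 1 / (1 + np.exp(-log_odds)) > 1 - eps:
                counts.append(n)
                break
    return np.mean(counts)

# Two hypothetical information structures: signal accuracy 0.60 vs 0.55.
for acc in (0.60, 0.55):
    print(f"accuracy {acc:.2f}: ~{draws_to_learn(acc):.0f} draws to reach 99% confidence")
```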

Review of Economic Studies
Abstract

We present an approach to analyse learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel “prediction accuracy” order over subjective models and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e. from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyse environments where learning is “slow”, such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning.
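As a simplified, hypothetical illustration of the kind of model comparison involved (a Berk-style i.i.d. sketch; the paper's “prediction accuracy” order is a refinement designed for the broader class of environments described above, and the distributions below are invented), one can rank subjective models by how closely their predicted signal distributions match the true one in Kullback-Leibler divergence:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) between finite distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.where(p > 0, p * np.log(p / q), 0.0)))

# Hypothetical example: a true distribution over three signal realizations and
# the signal distributions predicted by two misspecified subjective models.
true_dist = [0.5, 0.3, 0.2]
models = {
    "model A": [0.4, 0.4, 0.2],
    "model B": [0.6, 0.2, 0.2],
}

# With i.i.d. data, a misspecified Bayesian's beliefs concentrate on the models
# whose predictions are closest to the truth in KL divergence (Berk's theorem).
for name, predicted in models.items():
    print(f"{name}: D(true || predicted) = {kl(true_dist, predicted):.4f}")
```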

American Economic Review
Abstract

We formulate a model of social interactions and misinferences by agents who neglect assortativity in their society, mistakenly believing that they interact with a representative sample of the population. A key component of our approach is the interplay between this bias and agents' strategic incentives. We highlight a mechanism through which assortativity neglect, combined with strategic complementarities in agents' behavior, drives up action dispersion in society (e.g., socioeconomic disparities in education investment). We also suggest that the combination of assortativity neglect and strategic incentives may be relevant in understanding empirically documented misperceptions of income inequality and political attitude polarization.
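A stylized two-group calculation (not the paper's model, which involves inference about a latent state; the linear best responses and parameter values below are assumptions made purely for illustration) captures the amplification mechanism: if agents best respond to the population-average action they perceive, and assortativity neglect leads them to equate that average with their own assortative neighborhood's average, action dispersion is scaled up by 1/(1 - beta), where beta is the strength of the complementarity:

```python
import numpy as np

# Stylized illustration of assortativity neglect with strategic complementarity
# (hypothetical parameters). Two equal-sized groups with intrinsic types
# theta_H > theta_L choose actions
#     a_g = theta_g + beta * (perceived population-average action),
# where beta in (0, 1) measures the strength of the complementarity.
theta_H, theta_L, beta = 1.0, 0.0, 0.5

# Correct perception: both groups perceive the true population average
# (a_H + a_L) / 2, so the common term cancels out of the dispersion.
A = np.array([[1 - beta / 2, -beta / 2],
              [-beta / 2, 1 - beta / 2]])
a_H_correct, a_L_correct = np.linalg.solve(A, [theta_H, theta_L])

# Assortativity neglect in a fully assortative society: each group mistakes its
# own group's action for the population average, so a_g = theta_g + beta * a_g.
a_H_neglect, a_L_neglect = theta_H / (1 - beta), theta_L / (1 - beta)

print("dispersion, correct perception:  ", a_H_correct - a_L_correct)    # = theta_H - theta_L
print("dispersion, assortativity neglect:", a_H_neglect - a_L_neglect)   # = (theta_H - theta_L) / (1 - beta)
```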

Discussion Paper
Abstract

We study which multi-agent information structures are more effective at eliminating both first-order and higher-order uncertainty, and hence at facilitating efficient play in incomplete-information coordination games. We consider a learning setting à la Cripps, Ely, Mailath, and Samuelson (2008) where players have access to many private signal draws from an information structure. First, we characterize the rate at which players achieve approximate common knowledge of the state, based on a simple learning efficiency index. Notably, this coincides with the rate at which players’ first-order uncertainty vanishes, as higher-order uncertainty becomes negligible relative to first-order uncertainty after enough signal draws. Based on this, we show that information structures with higher learning efficiency induce more efficient equilibrium outcomes in coordination games that are played after sufficiently many signal draws. We highlight some robust information design implications for games in data-rich environments.

Discussion Paper
Abstract

We study settings in which, prior to playing an incomplete information game, players observe many draws of private signals about the state from some information structure. Signals are i.i.d. across draws, but may display arbitrary correlation across players. For each information structure, we define a simple learning efficiency index, which only considers the statistical distance between the worst-informed player’s marginal signal distributions in different states. We show, first, that this index characterizes the speed of common learning (Cripps, Ely, Mailath, and Samuelson, 2008): In particular, the speed at which players achieve approximate common knowledge of the state coincides with the slowest player’s speed of individual learning, and does not depend on the correlation across players’ signals. Second, we build on this characterization to provide a ranking over information structures: We show that, with sufficiently many signal draws, information structures with a higher learning efficiency index lead to better equilibrium outcomes, robustly for a rich class of games and objective functions that are “aligned at certainty.” We discuss implications of our results for constrained information design in games and for the question of when information structures are complements vs. substitutes.
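The abstract does not spell out which statistical distance the index uses, but a standard candidate from large-deviations theory is Chernoff information, which governs how fast a Bayesian's probability of ranking the wrong state above the true one decays with i.i.d. draws. A hypothetical sketch of a worst-informed-player index built from it (the two-player information structure below is invented for illustration):

```python
import numpy as np

def chernoff_information(p, q):
    """Chernoff information between two finite distributions p and q:
    -min_{s in [0,1]} log sum_x p(x)^s * q(x)^(1-s).
    It is the exponential rate at which a Bayesian's probability of ranking the
    wrong state above the true one decays with i.i.d. draws."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    s_grid = np.linspace(0.0, 1.0, 1001)
    vals = [np.log(np.sum(p**s * q**(1 - s))) for s in s_grid]
    return -min(vals)

# Hypothetical two-player, two-state information structure: each entry is a
# player's marginal signal distribution over three signal realizations.
marginals_state0 = {"player 1": [0.6, 0.3, 0.1], "player 2": [0.5, 0.3, 0.2]}
marginals_state1 = {"player 1": [0.2, 0.3, 0.5], "player 2": [0.4, 0.3, 0.3]}

rates = {i: chernoff_information(marginals_state0[i], marginals_state1[i])
         for i in marginals_state0}
print("individual learning rates:", rates)
print("worst-informed-player index:", min(rates.values()))
```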

Discussion Paper
Abstract

We study settings in which, prior to playing an incomplete information game, players observe many draws of private signals about the state from some information structure. Signals are i.i.d. across draws, but may display arbitrary correlation across players. For each information structure, we define a simple learning efficiency index, which only considers the statistical distance between the worst-informed player’s marginal signal distributions in different states. We show, first, that this index characterizes the speed of common learning (Cripps, Ely, Mailath, and Samuelson, 2008): In particular, the speed at which players achieve approximate common knowledge of the state coincides with the slowest player’s speed of individual learning, and does not depend on the correlation across players’ signals. Second, we build on this characterization to provide a ranking over information structures: We show that, with sufficiently many signal draws, information structures with a higher learning efficiency index lead to better equilibrium outcomes, robustly for a rich class of games and objective functions. We discuss implications of our results for constrained information design in games and for the question of when information structures are complements vs. substitutes.

Discussion Paper
Abstract

We study robust welfare comparisons of learning biases, i.e., deviations from correct Bayesian updating. Given a true signal distribution, we deem one bias more harmful than another if it yields lower objective expected payoffs in all decision problems. We characterize this ranking in static (one signal) and dynamic (many signals) settings. While the static characterization compares posteriors signal-by-signal, the dynamic characterization employs an “efficiency index” quantifying the speed of belief convergence. Our results yield welfare-founded quantifications of the severity of well-documented biases. Moreover, the static and dynamic rankings can disagree, and “smaller” biases can be worse in dynamic settings.
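A minimal numerical sketch of the dynamic comparison, under assumptions not taken from the paper: the bias is modeled as updating with distorted likelihoods, and the per-signal drift of the log posterior odds is used as a simple stand-in for the speed of belief convergence (by the law of large numbers, a positive drift means the belief on the wrong state decays exponentially at that rate):

```python
import numpy as np

# Binary state, binary signals. In the true state, the signal "1" occurs with
# probability p_true. A biased agent updates with distorted likelihoods: she
# believes the probability of signal "1" is p_hat_true in the true state and
# p_hat_wrong in the wrong state (all values hypothetical).
def convergence_drift(p_true, p_hat_true, p_hat_wrong):
    """Expected per-signal increase in the agent's log posterior odds on the
    true state, computed under the true signal distribution."""
    drift_1 = np.log(p_hat_true / p_hat_wrong)              # signal = 1
    drift_0 = np.log((1 - p_hat_true) / (1 - p_hat_wrong))  # signal = 0
    return p_true * drift_1 + (1 - p_true) * drift_0

p_true = 0.7  # hypothetical true signal accuracy

# Two hypothetical biases, both distorting the perceived likelihoods:
print("bias A drift per signal:", convergence_drift(p_true, p_hat_true=0.65, p_hat_wrong=0.35))
print("bias B drift per signal:", convergence_drift(p_true, p_hat_true=0.80, p_hat_wrong=0.45))
```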

Discussion Paper
Abstract

We study robust welfare comparisons of learning biases, i.e., deviations from correct Bayesian updating. Given a true signal distribution, we deem one bias more harmful than another if it yields lower objective expected payoffs in all decision problems. We characterize this ranking in static (one signal) and dynamic (many signals) settings. While the static characterization compares posteriors signal-by-signal, the dynamic characterization employs an “efficiency index” quantifying the speed of belief convergence. Our results yield welfare-founded quantifications of the severity of well-documented biases. Moreover, the static and dynamic rankings can conflict, and “smaller” biases can be worse in dynamic settings. 

Discussion Paper
Abstract

We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel “prediction accuracy” order over subjective models, and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). We show that these conditions can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to analyze environments where learning is “slow,” such as costly information acquisition and sequential social learning. In such environments, we illustrate that even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can generate extreme failures of learning.

Discussion Paper
Abstract

We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine—without the need to explicitly analyze learning dynamics—when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel “prediction accuracy” ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where, unlike most settings the literature has focused on so far, long-run beliefs can fail to be robust to the details of the true data-generating process or agents’ perception thereof. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.

Discussion Paper
Abstract

We study to what extent information aggregation in social learning environments is robust to slight misperceptions of others’ characteristics (e.g., tastes or risk attitudes). We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents’ actions over time, where agents’ actions depend not only on their beliefs about the state but also on their idiosyncratic types. When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, our first main result shows that even arbitrarily small amounts of misperception can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. This stark discontinuous departure from the correctly specified benchmark motivates independent analysis of information aggregation under misperception.
Our second main result shows that any misperception of the type distribution gives rise to a specific failure of information aggregation where agents’ long-run beliefs and behavior vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Finally, we show that how sensitive information aggregation is to misperception depends on how rich agents’ payoff-relevant uncertainty is. A design implication is that information aggregation can be improved through interventions aimed at simplifying the agents’ learning environment.

Discussion Paper
Abstract

We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long-run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents’ actions over time, where agents’ actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long-run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Third, we show that how fragile information aggregation is to misperception depends on the richness of agents’ payoff-relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents’ learning environment. The key feature behind our findings is that agents’ belief-updating becomes “decoupled” from the true state over time. We point to other environments where this feature is present and leads to similar fragility results.
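A minimal sketch of the “decoupling” feature driving such breakdowns (a stylized illustration, not the paper's model; the action frequency and misperceived likelihoods below are invented): once the distribution of the actions an agent samples is pinned down by the population's behavior rather than by the state itself, an agent whose misperception of others leads her to assign slightly different likelihoods to those actions in the two states keeps “learning” from uninformative observations, and her belief is driven to certainty on the same state regardless of the truth:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stylized "decoupling" (hypothetical numbers): suppose that, after some point,
# the frequency q with which a sampled agent takes action 1 is the same in both
# states, because behavior is driven by beliefs rather than by the state. An
# agent who misperceives others' types assigns slightly different likelihoods
# to that action in the two states and so keeps updating on observations that
# are in fact uninformative about the state.
q = 0.60                        # true, state-independent frequency of action 1
q_hat_H, q_hat_L = 0.58, 0.50   # misperceived likelihoods of action 1 in states H, L

def long_run_belief(n_obs=5000):
    """Belief on state H after n_obs sampled actions, starting from a uniform prior."""
    x = rng.random(n_obs) < q
    llr = np.where(x, np.log(q_hat_H / q_hat_L),
                      np.log((1 - q_hat_H) / (1 - q_hat_L)))
    log_odds = np.sum(llr)
    return 1 / (1 + np.exp(-log_odds))

# The sampled actions carry no information about the state, yet the belief is
# driven to (near) certainty -- on the same state whichever state is true.
print("long-run belief on state H:", round(long_run_belief(), 6))
```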