
Ryota Iijima Publications

Discussion Paper
Abstract

We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. We introduce a novel “prediction accuracy” order over subjective models, and observe that this makes it possible to partially restore standard martingale convergence arguments that apply under correctly specified learning. Based on this, we derive general conditions to determine when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs).

Discussion Paper
Abstract

We present an approach to analyze learning outcomes in a broad class of misspecified environments, spanning both single-agent and social learning. Our main results provide general criteria to determine—without the need to explicitly analyze learning dynamics—when beliefs in a given environment converge to some long-run belief either locally or globally (i.e., from some or all initial beliefs). The key ingredient underlying these criteria is a novel “prediction accuracy” ordering over subjective models that refines existing comparisons based on Kullback-Leibler divergence. We show that these criteria can be applied, first, to unify and generalize various convergence results in previously studied settings. Second, they enable us to identify and analyze a natural class of environments, including costly information acquisition and sequential social learning, where unlike most settings the literature has focused on so far, long-run beliefs can fail to be robust to the details of the true data generating process or agents’ perception thereof. In particular, even if agents learn the truth when they are correctly specified, vanishingly small amounts of misspecification can lead to extreme failures of learning.
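For context on the Kullback-Leibler benchmark that the “prediction accuracy” order refines (standard background, not the paper’s new criterion): with exogenously generated i.i.d. data from a true distribution p* and under standard regularity conditions, a misspecified Bayesian’s posterior concentrates on the subjective models closest to p* in Kullback-Leibler divergence,

\[
\Theta^{*} \;=\; \operatorname*{arg\,min}_{\theta \in \Theta} D_{\mathrm{KL}}\!\bigl(p^{*} \,\|\, p_{\theta}\bigr)
\;=\; \operatorname*{arg\,min}_{\theta \in \Theta} \mathbb{E}_{y \sim p^{*}}\!\left[\log \frac{p^{*}(y)}{p_{\theta}(y)}\right].
\]

In the single-agent and social learning environments covered by the paper, the data an agent observes typically depends on her own and others’ behavior, so this benchmark alone does not settle convergence; the prediction accuracy order is the paper’s refinement of such KL-based comparisons for those settings.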

Discussion Paper
Abstract

We propose a class of multiple-prior representations of preferences under ambiguity, where the belief the decision-maker (DM) uses to evaluate an uncertain prospect is the outcome of a game played by two conflicting forces, Pessimism and Optimism. The model does not restrict the sign of the DM’s ambiguity attitude, and we show that it provides a unified framework through which to characterize different degrees of ambiguity aversion, and to represent the co-existence of negative and positive ambiguity attitudes within individuals as documented in experiments. We prove that our baseline representation, dual-self expected utility (DSEU), yields a novel representation of the class of invariant biseparable preferences (Ghirardato, Maccheroni, and Marinacci, 2004), which drops uncertainty aversion from maxmin expected utility (Gilboa and Schmeidler, 1989). Extensions of DSEU allow for more general departures from independence.

Discussion Paper
Abstract

We propose a class of multiple-prior representations of preferences under ambiguity, where the belief the decision-maker (DM) uses to evaluate an uncertain prospect is the outcome of a game played by two conflicting forces, Pessimism and Optimism. The model does not restrict the sign of the DM’s ambiguity attitude, and we show that it provides a unified framework through which to characterize different degrees of ambiguity aversion, and to represent the co-existence of negative and positive ambiguity attitudes within individuals as documented in experiments. We prove that our baseline representation, dual-self expected utility (DSEU), yields a novel representation of the class of invariant biseparable preferences (Ghirardato, Maccheroni, and Marinacci, 2004), which drops uncertainty aversion from maxmin expected utility (Gilboa and Schmeidler, 1989), while extensions of DSEU allow for more general departures from independence. We also provide foundations for a generalization of prior-by-prior belief updating to our model.

Discussion Paper
Abstract

We propose a class of multiple-prior representations of preferences under ambiguity where the belief the decision-maker (DM) uses to evaluate an uncertain prospect is the outcome of a game played by two conflicting forces, Pessimism and Optimism. The model does not restrict the sign of the DM’s ambiguity attitude, and we show that it provides a unified framework through which to characterize different degrees of ambiguity aversion, as well as to represent context-dependent negative and positive ambiguity attitudes documented in experiments. We prove that our baseline representation, Boolean expected utility (BEU), yields a novel representation of the class of invariant biseparable preferences (Ghirardato, Maccheroni, and Marinacci, 2004), which drops uncertainty aversion from maxmin expected utility (Gilboa and Schmeidler, 1989), while extensions of BEU allow for more general departures from independence.

Discussion Paper
Abstract

We propose a multiple-prior model of preferences under ambiguity that provides a unified lens through which to understand different formalizations of ambiguity aversion, as well as context-dependent negative and positive ambiguity attitudes documented in experiments. This model, Boolean expected utility (BEU), represents the belief the decision-maker uses to evaluate any uncertain prospect as the outcome of a game between two conflicting forces, Pessimism and Optimism. We prove, first, that BEU provides a novel representation of the class of invariant biseparable preferences (Ghirardato, Maccheroni, and Marinacci, 2004). Second, BEU accommodates rich patterns of ambiguity attitudes, which we characterize in terms of the relative power allocated to each force in the game. 
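For readers who want formulas behind the verbal description, here is a sketch of the objects involved; the precise statement is in the paper and the display below is only indicative. Gilboa-Schmeidler maxmin expected utility evaluates an act f by the worst case over a single set C of priors, whereas the two-force evaluation can be written as a max-min over a collection \(\mathcal{P}\) of sets of priors, with Optimism choosing the set and Pessimism choosing the prior within it:

\[
V_{\mathrm{maxmin}}(f) \;=\; \min_{p \in C} \int u(f)\, dp,
\qquad
V_{\mathrm{BEU}}(f) \;=\; \max_{P \in \mathcal{P}} \; \min_{p \in P} \int u(f)\, dp .
\]

When \(\mathcal{P}\) contains a single set, Optimism has no power and the evaluation reduces to maxmin expected utility; when every set in \(\mathcal{P}\) is a singleton, Pessimism has no power and the evaluation is a maxmax. Intermediate collections allocate power between the two forces, which is how the representation can accommodate both negative and positive ambiguity attitudes.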

Discussion Paper
Abstract

We study to what extent information aggregation in social learning environments is robust to slight misperceptions of others’ characteristics (e.g., tastes or risk attitudes). We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents’ actions over time, where agents’ actions depend not only on their beliefs about the state but also on their idiosyncratic types. When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, our first main result shows that even arbitrarily small amounts of misperception can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. This stark discontinuous departure from the correctly specified benchmark motivates independent analysis of information aggregation under misperception.
Our second main result shows that any misperception of the type distribution gives rise to a specific failure of information aggregation where agents’ long-run beliefs and behavior vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Finally, we show that how sensitive information aggregation is to misperception depends on how rich agents’ payoff-relevant uncertainty is. A design implication is that information aggregation can be improved through interventions aimed at simplifying the agents’ learning environment.

Discussion Paper
Abstract

We exhibit a natural environment, social learning among heterogeneous agents, where even slight misperceptions can have a large negative impact on long-run learning outcomes. We consider a population of agents who obtain information about the state of the world both from initial private signals and by observing a random sample of other agents’ actions over time, where agents’ actions depend not only on their beliefs about the state but also on their idiosyncratic types (e.g., tastes or risk attitudes). When agents are correct about the type distribution in the population, they learn the true state in the long run. By contrast, we show, first, that even arbitrarily small amounts of misperception about the type distribution can generate extreme breakdowns of information aggregation, where in the long run all agents incorrectly assign probability 1 to some fixed state of the world, regardless of the true underlying state. Second, any misperception of the type distribution leads long-run beliefs and behavior to vary only coarsely with the state, and we provide systematic predictions for how the nature of misperception shapes these coarse long-run outcomes. Third, we show that how fragile information aggregation is to misperception depends on the richness of agents’ payoff-relevant uncertainty; a design implication is that information aggregation can be improved by simplifying agents’ learning environment. The key feature behind our findings is that agents’ belief-updating becomes “decoupled” from the true state over time. We point to other environments where this feature is present and leads to similar fragility results.
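To convey the flavor of the fragility result, the following is a deliberately stylized toy (a hypothetical mean-field recursion, not the model analyzed in the paper): each generation reads the state from the fraction of predecessors taking an action, inverting that fraction through its perceived distribution of idiosyncratic types, and adds a weak private signal. A small systematic error in the inversion compounds over time and can swamp the state-dependent signal, so beliefs become decoupled from the truth.

```python
import numpy as np
from scipy.stats import norm

# Deliberately stylized toy (NOT the model in the paper): a mean-field
# "learning from actions" recursion. The public log-odds belief ell about a
# binary state is inferred each period from the fraction of predecessors
# taking action 1, by inverting that fraction through a (possibly slightly
# wrong) perceived dispersion of idiosyncratic types, plus a weak signal.

def simulate(sigma_true=1.0, sigma_perceived=1.0, theta=1, ell0=0.0,
             signal_strength=0.05, periods=400):
    """Return the path of the public log-odds belief that the state is 1."""
    ell = ell0
    path = [ell]
    drift = signal_strength if theta == 1 else -signal_strength
    for _ in range(periods):
        # True fraction taking action 1 when the public belief is ell:
        # an agent acts iff ell exceeds her type, type ~ N(0, sigma_true).
        q = norm.cdf(ell / sigma_true)
        q = min(max(q, 1e-12), 1 - 1e-12)  # numerical guard against 0 and 1
        # The next generation inverts q through its *perceived* type
        # dispersion, then adds the weak state-dependent signal.
        ell = sigma_perceived * norm.ppf(q) + drift
        path.append(ell)
    return np.array(path)

# Correct perception: the belief drifts toward the true state.
correct = simulate(sigma_perceived=1.0, theta=1)
# A 3% misperception of type dispersion: the multiplicative distortion
# (sigma_perceived / sigma_true) compounds and can dominate the signal.
wrong_theta1 = simulate(sigma_perceived=1.03, theta=1, ell0=-4.0)
wrong_theta0 = simulate(sigma_perceived=1.03, theta=0, ell0=-4.0)

print("correct spec, final log-odds :", correct[-1])
print("misperceived, theta=1, final :", wrong_theta1[-1])
print("misperceived, theta=0, final :", wrong_theta0[-1])
```

In this toy, the correctly specified run ends nearly certain of the true state, while both misperceived runs end nearly certain of state 0 regardless of the true state; that discontinuity in long-run outcomes, driven by updating becoming decoupled from the state, is the qualitative pattern the abstract describes.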

Discussion Paper
Abstract

We formulate a model of social interactions and misinferences by agents who neglect assortativity in their society, mistakenly believing that they interact with a representative sample of the population. A key component of our approach is the interplay between this bias and agents’ strategic incentives. We highlight a mechanism through which assortativity neglect, combined with strategic complementarities in agents’ behavior, drives up action dispersion in society (e.g., socioeconomic disparities in education investment). We also show how the combination of assortativity neglect and strategic incentives may help to explain empirically documented misperceptions of income inequality and political attitude polarization.

Discussion Paper
Abstract

We formulate a model of social interactions and misinferences by agents who neglect assortativity in their society, mistakenly believing that they interact with a representative sample of the population. A key component of our approach is the interplay between this bias and agents’ strategic incentives. We highlight a mechanism through which assortativity neglect, combined with strategic complementarities in agents’ behavior, drives up action dispersion in society (e.g., socioeconomic disparities in education investment). We also suggest that the combination of assortativity neglect and strategic incentives may be relevant in understanding empirically documented misperceptions of income inequality and political attitude polarization. 
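One stylized way to see the dispersion channel (a back-of-the-envelope illustration, not the paper's general model): suppose best responses are linear with complementarity parameter \(\gamma \in (0,1)\), and that an agent who neglects assortativity treats the average action in her own assortative sample as the population average,

\[
a_i \;=\; t_i + \gamma\,\mathbb{E}_i[\bar a],
\qquad
\mathbb{E}_i[\bar a] \;=\;
\begin{cases}
\bar a & \text{(correct perception of the population)}\\[2pt]
\bar a_{N(i)} & \text{(assortativity neglect: own sample treated as representative).}
\end{cases}
\]

With correct perception the term \(\gamma \bar a\) is common to everyone, so in deviations from the mean \(a_i = t_i\) and action dispersion equals type dispersion. Under neglect, if the mean type in agent i's sample loads on her own type with coefficient \(\rho \in (0,1)\), the symmetric linear solution \(a_i = k\, t_i + \text{const}\) satisfies \(k = 1 + \gamma \rho k\), hence \(k = 1/(1-\gamma\rho) > 1\): the complementarity feeds the bias back on itself and amplifies dispersion, in the spirit of the education-investment example mentioned above.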

Abstract

We introduce and characterize a recursive model of dynamic choice that accommodates naivete about present bias. While recursive representations are important for tractable analysis of infinite-horizon problems, the commonly-used Strotz model of time inconsistency presents well-known technical difficulties in extensions to dynamic environments. Our model incorporates costly self-control in the sense of Gul and Pesendorfer (2001) to overcome these hurdles. The important novel condition is an axiom for naivete. We first introduce appropriate definitions of absolute and comparative naivete for a simple two-period model, and explore their implications for the costly self-control model. We then develop suitable extensions of these definitions to infinite-horizon environments. Incorporating the definition of absolute naivete as an axiom, we characterize a recursive representation of naive quasi-hyperbolic discounting with self-control for an individual who is jointly overoptimistic about her present-bias factor and her ability to resist instant gratification. We also study the implications of our proposed comparison of naivete for this recursive representation and uncover new restrictions on the present-bias and self-control parameters that characterize comparative naivete. Finally, we discuss the subtleties that preclude more general notions of naivete, and illuminate the impossibility of a definition that simultaneously accommodates both random choice and costly self-control.

Abstract

We introduce and characterize a recursive model of dynamic choice that accommodates naiveté about present bias. The model incorporates costly self-control in the sense of Gul and Pesendorfer (2001) to overcome the technical hurdles of the Strotz representation. The important novel condition is an axiom for naiveté. We first introduce appropriate definitions of absolute and comparative naiveté for a simple two-period model, and explore their implications for the costly self-control model. We then extend this definition for infinite-horizon environments, and discuss some of the subtleties involved with the extension. Incorporating the definition of absolute naiveté as an axiom, we characterize a recursive representation of naive quasi-hyperbolic discounting with self-control for an individual who is jointly overoptimistic about her present-bias factor and her ability to resist instant gratification. We study the implications of our proposed comparison of naiveté for the parameters of the recursive representation. Finally, we discuss the obstacles that preclude more general notions of naiveté, and illuminate the impossibility of a definition that simultaneously incorporates both random choice and costly self-control.
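For concreteness, the standard building blocks referenced in these abstracts, written in their textbook two-period and menu forms (the papers' contribution is the recursive, infinite-horizon representation combining them, which the display below does not attempt to state): quasi-hyperbolic discounting with a perceived present-bias factor, and Gul-Pesendorfer costly self-control,

\[
U_t \;=\; u(c_t) + \beta \sum_{k \ge 1} \delta^{k} u(c_{t+k}),
\qquad
\beta \;\le\; \hat{\beta} \;\le\; 1,
\]

where \(\beta\) is the actual present-bias factor and \(\hat{\beta}\) the factor the individual believes her future selves will apply (\(\hat{\beta} = \beta\) is sophistication, \(\hat{\beta} = 1\) full naivete), and

\[
W(A) \;=\; \max_{x \in A} \bigl[u(x) + v(x)\bigr] \;-\; \max_{y \in A} v(y),
\]

the Gul-Pesendorfer evaluation of a menu A, in which v captures temptation and \(\max_{y \in A} v(y) - v(x)\) is the self-control cost incurred when x is chosen. Naivete in the papers above is overoptimism about both \(\beta\) and the ability to resist temptation.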

Abstract

Under dynamic random utility, an agent (or population of agents) solves a dynamic decision problem subject to evolving private information. We analyze the fully general and non-parametric model, axiomatically characterizing the implied dynamic stochastic choice behavior. A key new feature relative to static or i.i.d. versions of the model is that when private information displays serial correlation, choices appear history dependent: different sequences of past choices reflect different private information of the agent, and hence typically lead to different distributions of current choices. Our axiomatization imposes discipline on the form of history dependence that can arise under arbitrary serial correlation. Dynamic stochastic choice data lets us distinguish central models that coincide in static domains, in particular private information in the form of utility shocks vs. learning, and to study inherently dynamic phenomena such as choice persistence. We relate our model to specifications of utility shocks widely used in empirical work, highlighting new modeling tradeoffs in the dynamic discrete choice literature. Finally, we extend our characterization to allow past consumption to directly affect the agent’s utility process, accommodating models of habit formation and experimentation.
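A minimal simulation of the history-dependence point (a hypothetical parametric specification chosen only for illustration, not the paper's nonparametric model): when the private utility shock on one alternative is serially correlated, the probability of choosing that alternative today differs across yesterday's choices, even though each period's choice is a fresh static maximization.

```python
import numpy as np

# Hypothetical illustration (not the paper's specification): two alternatives
# a and b; alternative a's utility carries a private shock following an AR(1)
# process, alternative b is normalized to utility 0. Choices are serially
# correlated purely because the shock is persistent.

rng = np.random.default_rng(0)
n_agents = 200_000
rho, sigma = 0.8, 1.0   # serial correlation and scale of the utility shock

eps1 = rng.normal(0.0, sigma, n_agents)
eps2 = rho * eps1 + rng.normal(0.0, sigma * np.sqrt(1 - rho**2), n_agents)

choose_a_1 = eps1 > 0   # choose a in period 1 iff its shock is positive
choose_a_2 = eps2 > 0

p_a2_given_a1 = choose_a_2[choose_a_1].mean()
p_a2_given_b1 = choose_a_2[~choose_a_1].mean()

print(f"P(a at t=2 | a at t=1) = {p_a2_given_a1:.3f}")
print(f"P(a at t=2 | b at t=1) = {p_a2_given_b1:.3f}")
```

With serial correlation of 0.8 this prints roughly 0.80 versus 0.20: past choices reveal the persistent private shock, so observed behavior is history dependent even without habit formation or switching costs, which is the kind of history dependence the axiomatization is designed to discipline.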