Reclassification risk is a major concern in health insurance, where contracts typically last one year but health shocks often persist much longer. While most health systems with private insurers pair short-run contracts with substantial pricing regulations to reduce reclassification risk, long-term contracts with one-sided insurer commitment have significant potential to reduce reclassification risk without the negative side effects of price regulation, such as adverse selection. We theoretically characterize optimal long-term insurance contracts with one-sided commitment, extending the literature in directions necessary for studying health insurance markets. We leverage this characterization to provide a simple algorithm for computing optimal contracts from primitives. We estimate key market fundamentals using data on all under-65 privately insured consumers in Utah. We find that dynamic contracts are very effective at reducing reclassification risk for consumers who arrive at the market in good health, but ineffective for those who arrive in bad health, demonstrating a role for government insurance of pre-market health risks. Individuals with steeply rising income profiles find front-loading costly and thus relatively prefer ACA-type exchanges. Switching costs enhance, while myopia moderately compromises, the performance of dynamic contracts.
We document that sales of individual products decline steadily throughout most of the product life cycle. Products quickly become obsolete as they face competition from newer products sold by competing firms and the same firm. We build a dynamic model that highlights an innovation-obsolescence cycle, where firms need to introduce new products to grow; otherwise, their portfolios become obsolete as rivals introduce their own new products. By introducing new products, however, firms accelerate the decline of their own existing products, further depressing their sales. This mechanism has sizable implications for quantifying economic growth and the impact of innovation policies.
Considerable evidence in past research shows size distortion in standard tests for zero autocorrelation or zero cross-correlation when time series are not independent identically distributed random variables, pointing to the need for more robust procedures. Recent tests for serial correlation and cross-correlation in Dalla, Giraitis, and Phillips (2022) provide a more robust approach, allowing for heteroskedasticity and dependence in uncorrelated data under restrictions that require a smooth, slowly evolving deterministic heteroskedasticity process. The present work removes those restrictions and validates the robust testing methodology for a wider class of innovations and regression residuals, allowing for heteroskedastic, uncorrelated, and nonstationary data settings. The updated analysis enables more extensive use of the methodology in practical applications. Monte Carlo experiments confirm excellent finite sample performance of the robust test procedures even for highly complex white noise processes. The empirical examples show that use of robust testing methods can materially reduce spurious evidence of correlations found by standard testing procedures.
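The contrast between the standard and robust approaches can be illustrated with a small simulation. The sketch below compares the classical statistic, √n·ρ̂ₖ, with a self-normalized t-ratio of the kind studied in this literature, on ARCH-type noise that is serially uncorrelated but conditionally heteroskedastic; the data-generating process and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def standard_stat(x, k):
    """Classical test statistic sqrt(n) * rho_hat_k; approximately N(0,1)
    only when the data are i.i.d. under the null."""
    x = x - x.mean()
    rho = np.sum(x[k:] * x[:-k]) / np.sum(x * x)
    return np.sqrt(len(x)) * rho

def robust_stat(x, k):
    """Self-normalized t-ratio for zero lag-k correlation; remains
    approximately N(0,1) under heteroskedastic, uncorrelated data."""
    x = x - x.mean()
    prod = x[k:] * x[:-k]
    return np.sum(prod) / np.sqrt(np.sum(prod * prod))

# ARCH(1) noise: white noise (zero autocorrelation) with volatility
# clustering, the kind of process where the classical test over-rejects.
rng = np.random.default_rng(42)
n = 5000
e = np.zeros(n)
for t in range(1, n):
    e[t] = rng.standard_normal() * np.sqrt(0.2 + 0.7 * e[t - 1] ** 2)

t_std = standard_stat(e, 1)  # dispersion inflated by the ARCH dependence
t_rob = robust_stat(e, 1)    # correctly sized under the null
```

Repeating the simulation many times would show the classical statistic rejecting far more often than its nominal size, while the robust statistic stays close to N(0,1).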
The failure of Silicon Valley Bank on March 10, 2023, brought attention to significant weaknesses across the banking system, leading to a panic that spread to other vulnerable banks. With the subsequent failures of Signature Bank and First Republic Bank, the United States experienced three of the four largest bank failures in its history over a two-month period. Several features of the Silicon Valley Bank failure make it an ideal teaching case for explaining the underlying economics of banking in general and banking crises in particular. This paper tries to do that.
More than two million U.S. households have an eviction case filed against them each year. Policymakers at the federal, state, and local levels are increasingly pursuing policies to reduce the number of evictions, citing harm to tenants and high public expenditures related to homelessness. We study the consequences of eviction for tenants using newly linked administrative data from two major urban areas: Cook County (which includes Chicago) and New York City. We document that prior to housing court, tenants experience declines in earnings and employment and increases in financial distress and hospital visits. These pre-trends pose a challenge for disentangling correlation and causation. To address this problem, we use an instrumental variables approach based on cases randomly assigned to judges of varying leniency. We find that an eviction order increases homelessness and hospital visits and reduces earnings, durable goods consumption, and access to credit in the first two years. Effects on housing and labor market outcomes are driven by impacts for female and Black tenants. In the longer run, eviction increases indebtedness and reduces credit scores.
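The judge-leniency design behind the instrumental variables approach can be sketched with a small simulation. Everything below, including the variable names, effect sizes, and the leave-one-out leniency instrument, is a hypothetical illustration of the general research design, not the paper's actual data or specification.

```python
import numpy as np

rng = np.random.default_rng(7)
n_cases, n_judges = 5000, 50

# Cases are randomly assigned to judges who differ in strictness.
judge = rng.integers(0, n_judges, n_cases)
strictness = rng.uniform(0.2, 0.8, n_judges)

# Unobserved tenant distress u drives both eviction and earnings,
# which is exactly the pre-trend/confounding problem.
u = rng.standard_normal(n_cases)
evicted = (rng.uniform(size=n_cases) < strictness[judge] + 0.1 * u).astype(float)
earnings = 10.0 - 2.0 * evicted - 3.0 * u + rng.standard_normal(n_cases)  # true effect: -2

# Instrument: leave-one-out eviction rate of the assigned judge.
tot = np.bincount(judge, weights=evicted, minlength=n_judges)
cnt = np.bincount(judge, minlength=n_judges)
z = (tot[judge] - evicted) / (cnt[judge] - 1)

def slope_ols(y, d):
    """Slope from regressing y on a constant and d."""
    X = np.column_stack([np.ones_like(d), d])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def slope_2sls(y, d, z):
    """Two-stage least squares using z as instrument for d."""
    Z = np.column_stack([np.ones_like(z), z])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]  # first stage
    return slope_ols(y, d_hat)                        # second stage

b_ols = slope_ols(earnings, evicted)     # contaminated by distress u
b_iv = slope_2sls(earnings, evicted, z)  # isolates judge-driven variation
```

In this setup the naive OLS slope overstates the harm of eviction because distressed tenants are both more likely to be evicted and have lower earnings, while the 2SLS estimate recovers something close to the true causal effect.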
Much of the extant literature predicts market returns with “simple” models that use only a few parameters. Contrary to conventional wisdom, we theoretically prove that simple models severely understate return predictability compared to “complex” models in which the number of parameters exceeds the number of observations. We empirically document the virtue of complexity in U.S. equity market return prediction. Our findings establish the rationale for modeling expected returns through machine learning.
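One way to make the "complex model" idea concrete is ridge regression on random nonlinear features, where the parameter count P can far exceed the number of observations T; the dual (kernel) form keeps the computation a T×T solve. The data, feature map, and tuning values below are an illustrative sketch, not the paper's empirical design.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 120, 15                                # e.g. 120 months of d predictor signals
X = rng.standard_normal((T, d))
beta = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta + 2.0 * rng.standard_normal(T)   # noisy "returns"
x_new = rng.standard_normal(d)                # predictors for the next period

def rf_ridge_forecast(X, y, x_new, P, lam=1e-3, seed=1):
    """Ridge forecast on P random sine features; P may far exceed T = len(y)."""
    g = np.random.default_rng(seed)
    W = g.standard_normal((X.shape[1], P))
    S = np.sin(X @ W) / np.sqrt(P)            # T x P training feature matrix
    s_new = np.sin(x_new @ W) / np.sqrt(P)    # features of the new observation
    # Dual form: beta_hat = S' (S S' + lam I)^{-1} y, so only a T x T system
    # is solved even when P is in the thousands.
    alpha = np.linalg.solve(S @ S.T + lam * np.eye(len(y)), y)
    return float(s_new @ (S.T @ alpha))

simple = rf_ridge_forecast(X, y, x_new, P=10)      # fewer parameters than observations
complex_ = rf_ridge_forecast(X, y, x_new, P=6000)  # fifty times more parameters than observations
```

Both forecasts are well defined because ridge regularization (and the implicit minimum-norm solution as lam shrinks) pins down a unique fit even when P > T.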
Two homogeneous-good firms compete for a consumer's unitary demand. The consumer is rationally inattentive and pays entropy costs to process information about firms' offers. Compared to a collusion benchmark, competition produces two effects. As in standard models, competition puts downward pressure on prices. But, additionally, an attention effect arises: the consumer engages in trade more often. This alleviates the commitment problem that firms have when facing inattentive consumers and increases trade efficiency. For high enough attention costs, the attention effect dominates the effect on prices: firms' profits are higher under competition than under collusion.
We study how long-lived, rational agents learn in a social network. In every period, after observing the past actions of his neighbors, each agent receives a private signal, and chooses an action whose payoff depends only on the state. Since equilibrium actions depend on higher-order beliefs, it is difficult to characterize behavior. Nevertheless, we show that regardless of the size and shape of the network, the utility function, and the patience of the agents, the speed of learning in any equilibrium is bounded from above by a constant that only depends on the private signal distribution.
This paper considers a linear panel model with interactive fixed effects in which unobserved individual and time heterogeneities are captured by latent group structures and an unknown structural break, respectively. To enhance realism, the model may have different numbers of groups and/or different group memberships before and after the break. Starting from a preliminary nuclear-norm-regularized estimate refined by row- and column-wise linear regressions, we estimate the break point via binary segmentation and, simultaneously, the latent group structures together with the numbers of groups before and after the break via a sequential-testing K-means algorithm. It is shown that the break point, the numbers of groups, and the group memberships can each be estimated correctly with probability approaching one. Asymptotic distributions of the estimators of the slope coefficients are established. Monte Carlo simulations demonstrate excellent finite sample performance of the proposed estimation algorithm. An empirical application to real house price data across 377 Metropolitan Statistical Areas in the US from 1975 to 2014 suggests the presence of both structural breaks and changes in group membership.
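Two of the building blocks can be sketched in a few lines: a least-squares single-break search (the one-split step that binary segmentation applies recursively) and a plain K-means grouping of unit-level coefficient estimates. This is a toy illustration on synthetic data; the paper's actual procedure additionally involves nuclear-norm-regularized estimation and sequential testing to determine the number of groups.

```python
import numpy as np

def break_point(x):
    """Least-squares location of a single break in mean: choose the split
    minimizing the total within-segment sum of squared deviations."""
    best_t, best_ssr = None, np.inf
    for t in range(2, len(x) - 2):
        ssr = ((x[:t] - x[:t].mean()) ** 2).sum() + ((x[t:] - x[t:].mean()) ** 2).sum()
        if ssr < best_ssr:
            best_t, best_ssr = t, ssr
    return best_t

def kmeans_groups(coefs, k, iters=100, seed=0):
    """Plain 1-D k-means on unit-level slope estimates."""
    rng = np.random.default_rng(seed)
    centers = coefs[rng.choice(len(coefs), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.abs(coefs[:, None] - centers[None, :]), axis=1)
        # keep the old center if a cluster happens to empty out
        centers = np.array([coefs[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

rng = np.random.default_rng(3)
# A series with a mean shift at t = 30 ...
series = np.concatenate([rng.normal(0.0, 1.0, 30), rng.normal(3.0, 1.0, 30)])
bp = break_point(series)
# ... and unit-level slopes drawn from two well-separated latent groups.
coefs = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(2.0, 0.1, 20)])
labels = kmeans_groups(coefs, k=2)
```

With well-separated groups and a sizable mean shift, the break search lands near the true break date and K-means recovers the two-group partition.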
This paper is concerned with possible model misspecification in moment inequality models. Two issues are addressed. First, standard tests and confidence sets for the true parameter in the moment inequality literature are not robust to model misspecification in the sense that they exhibit spurious precision when the identified set is empty. This paper introduces tests and confidence sets that provide correct asymptotic inference for a pseudo-true parameter in such scenarios, and hence, do not suffer from spurious precision. Second, specification tests have relatively low power against a range of misspecified models. Thus, failure to reject the null of correct specification does not necessarily provide evidence of correct specification. That is, model specification tests are subject to the problem that absence of evidence is not evidence of absence. This paper develops new diagnostics for model misspecification in moment inequality models that do not suffer from this problem.