We study the socially optimal level of illiquidity in an economy populated by households with taste shocks and present bias with naive beliefs. The government chooses mandatory contributions to accounts, each with a different pre-retirement withdrawal penalty. Collected penalties are rebated lump sum. When households have homogeneous present bias, β, the social optimum is well approximated by a single account with an early-withdrawal penalty of 1−β. When households have heterogeneous present bias, the social optimum is well approximated by a two-account system: (i) an account that is completely liquid and (ii) an account that is completely illiquid until retirement.
Welfare depends on the quantity, quality, and range of goods consumed. We use trade data, which report the quantities and prices of the individual goods that countries exchange, to learn how the gains from trade and growth break down into these different margins. Our general equilibrium model, in which both quality and quantity contribute to consumption and to production, captures (i) how prices increase with importer and exporter per capita income, (ii) how the range of goods traded rises with importer and exporter size, and (iii) how products traveling longer distances have higher prices. Our framework can deliver a standard gravity formulation for total trade flows and for the gains from trade. We find that growth in the extensive margin contributes about half of the overall gains. Because of selection, quality plays a larger role in the welfare gains from international trade than in those from economic growth.
We fully solve a sorting problem with heterogeneous firms and multiple heterogeneous workers whose skills are imperfect substitutes. We show that optimal sorting, which we call mixed and countermonotonic, comprises two regions. In the first region, mediocre firms sort with mediocre workers and coworkers such that the output losses are equal across all these teams (mixing). In the second region, a high-skill worker sorts with low-skill coworkers and a high-productivity firm (countermonotonicity). We characterize the equilibrium wages and firm values. Quantitatively, our model can generate the dispersion of earnings within and across US firms.
We show that two important issues in empirical asset pricing—the presence of weak factors and the selection of test assets—are deeply connected. Since weak factors are those to which test assets have limited exposure, an appropriate selection of test assets can improve the strength of factors. Building on this insight, we introduce supervised principal component analysis (SPCA), a methodology that iterates supervised selection, principal-component estimation, and factor projection. It enables risk premia estimation and factor model diagnosis even when weak factors are present and not all factors are observed. We establish SPCA's asymptotic properties and showcase its empirical applications.
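The iterate-select-estimate-project loop described in the SPCA abstract can be sketched in a few lines. This is an illustrative simplification under assumed choices (the absolute-covariance screening statistic, a fixed number of selected assets, and one extracted factor per pass are all assumptions, not the paper's exact algorithm):

```python
import numpy as np

def spca(R, g, n_factors=2, n_select=50):
    """Illustrative SPCA loop: (i) screen test assets by covariance with
    the factor proxy g, (ii) extract the first principal component of the
    selected sub-panel, (iii) project the estimated factor out of the
    panel and the proxy, then repeat."""
    R = R - R.mean(axis=0)            # T x N panel of test-asset returns
    g = g - g.mean()                  # length-T factor proxy
    factors = []
    for _ in range(n_factors):
        score = np.abs(R.T @ g)       # supervised screening statistic per asset
        keep = np.argsort(score)[-n_select:]
        u, s, _ = np.linalg.svd(R[:, keep], full_matrices=False)
        f = u[:, 0] * s[0]            # first principal component of the selection
        factors.append(f)
        proj = np.outer(f, f) / (f @ f)
        R = R - proj @ R              # project the estimated factor out
        g = g - proj @ g
    return np.column_stack(factors)
```

In a simulated one-factor panel where only some assets load on the factor, the selection step concentrates the panel on exposed assets before the principal-component extraction, which is the sense in which test-asset selection strengthens a weak factor.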
This article provides a general framework to study the role of production networks in international GDP comovement. We first derive an additive decomposition of bilateral GDP comovement into components capturing shock transmission and shock correlation. We quantify this decomposition in a parsimonious multi-country, multi-sector dynamic network propagation model, using data for the G7 countries over the period 1978–2007. Our main finding is that while the network transmission of shocks is quantitatively important, it accounts for a minority of observed comovement under the estimated range of structural elasticities. Contemporaneous responses to correlated shocks in the production network are more successful at generating comovement than intertemporal propagation through capital accumulation. Extensions with multiple shocks, nominal rigidities, and international financial integration leave our main result unchanged. A combination of TFP and labour supply shocks is quantitatively successful at reproducing the observed international business cycle.
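The additive decomposition has a simple linear-algebra core. In a toy static setting where GDP is linear in country-level shocks, y = A ε with shock covariance Σ, the bilateral GDP covariance splits exactly into a transmission term (the diagonal of Σ: uncorrelated shocks spilling over through the network A) and a shock-correlation term (the off-diagonals). All numbers below are hypothetical:

```python
import numpy as np

A = np.array([[1.0, 0.3],        # hypothetical influence matrix:
              [0.2, 1.0]])       # row i = country i's exposure to each shock
rho = 0.4
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])   # shock covariance with cross-country correlation rho

cov = A @ Sigma @ A.T                                   # bilateral GDP covariance
transmission = A @ np.diag(np.diag(Sigma)) @ A.T        # shut down shock correlation
correlation = A @ (Sigma - np.diag(np.diag(Sigma))) @ A.T
assert np.allclose(cov, transmission + correlation)     # decomposition is additive
```

With these hypothetical parameters the bilateral covariance 0.924 splits into 0.5 from transmission and 0.424 from correlated shocks; the paper's finding is that, at estimated elasticities, the correlation component dominates.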
We introduce a new class of algorithms, stochastic generalized method of moments (SGMM), for estimation and inference on (overidentified) moment restriction models. Our SGMM is a novel stochastic approximation alternative to the popular Hansen (1982) (offline) GMM, and offers fast and scalable implementation with the ability to handle streaming datasets in real time. We establish almost sure convergence and a (functional) central limit theorem for the inefficient online 2SLS and the efficient SGMM. Moreover, we propose online versions of the Durbin–Wu–Hausman and Sargan–Hansen tests that can be seamlessly integrated within the SGMM framework. Extensive Monte Carlo simulations show that as the sample size increases, the SGMM matches the standard (offline) GMM in estimation accuracy while gaining in computational efficiency, indicating its practical value for both large-scale and online datasets. We demonstrate the efficacy of our approach through a proof of concept using two well-known empirical examples with large sample sizes.
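A minimal stochastic-approximation analogue of the online 2SLS idea, for a just-identified scalar model, updates the coefficient one observation at a time along the sample moment and averages the iterates. The function name, step-size rule, and Polyak averaging details below are illustrative assumptions, not the paper's exact SGMM recursion:

```python
import numpy as np

def online_iv(z, x, y, a=0.1, gamma=0.7):
    """Stochastic-approximation estimate of beta in y = x*beta + e with a
    single instrument z satisfying E[z*e] = 0. Each observation nudges beta
    along the moment z_t * (y_t - x_t * beta); the running (Polyak) average
    of the iterates is returned."""
    beta, avg = 0.0, 0.0
    for t in range(len(y)):
        step = a / (t + 1) ** gamma              # decaying gain sequence
        beta += step * z[t] * (y[t] - x[t] * beta)
        avg += (beta - avg) / (t + 1)            # running average of iterates
    return avg
```

Each observation is touched once and discarded, which is what makes the streaming setting feasible; on simulated endogenous data the averaged iterate ends up close to the offline 2SLS estimate.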
As deficits rise and concerns about tax avoidance by the rich increase, we study how unrealized gains and borrowing affect Americans’ income taxes. We have four main findings. First, measuring “economic income” as currently taxed income plus new unrealized gains, the income tax base captures 60% of economic income of the top 1% of wealth-holders (and 71% adjusting for inflation) and the vast majority of income for lower wealth groups. Second, adjusting for unrealized gains substantially lessens the progressivity of the income tax, although it remains largely progressive. Third, we quantify for the first time the amount of borrowing across the full wealth distribution. Focusing on the top 1%, while total borrowing is substantial, new borrowing each year is fairly small (1–2% of economic income) compared with their new unrealized gains, suggesting that “buy, borrow, die” is not a dominant tax avoidance strategy for the rich. Fourth, consumption is less than liquid income for rich Americans, partly because the rich have a large amount of liquid income and partly because their savings rates are high, suggesting that the main tax avoidance strategy of the super-rich is “buy, save, die.”
We introduce two data-driven procedures for optimal estimation and inference in nonparametric models using instrumental variables. The first is a data-driven choice of sieve dimension for a popular class of sieve two-stage least-squares estimators. When implemented with this choice, estimators of both the structural function h0 and its derivatives (such as elasticities) converge at the fastest possible (i.e. minimax) rates in sup-norm. The second is for constructing uniform confidence bands (UCBs) for h0 and its derivatives. Our UCBs guarantee coverage over a generic class of data-generating processes and contract at the minimax rate, possibly up to a logarithmic factor. As such, our UCBs are asymptotically more efficient than UCBs based on the usual approach of undersmoothing. As an application, we estimate the elasticity of the intensive margin of firm exports in a monopolistic competition model of international trade. Simulations illustrate the good performance of our procedures in empirically calibrated designs. Our results provide evidence against common parameterizations of the distribution of unobserved firm heterogeneity.
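A bare-bones sieve two-stage least-squares estimator illustrates the object the abstract starts from: approximate h0 by a polynomial sieve in x and instrument it with a polynomial basis in z. The bases, fixed dimensions, and function names here are assumptions for illustration; the paper's contributions (the data-driven choice of sieve dimension and the uniform confidence bands) are omitted:

```python
import numpy as np

def sieve_2sls(x, z, y, k_x=4, k_z=5):
    """Minimal sieve 2SLS for y = h0(x) + e with E[e|z] = 0: approximate
    h0 by a degree-(k_x - 1) polynomial psi(x) and use a degree-(k_z - 1)
    polynomial b(z) as instruments (dimensions fixed, not data-driven)."""
    Psi = np.column_stack([x**j for j in range(k_x)])
    B = np.column_stack([z**j for j in range(k_z)])
    # first stage: project each sieve term psi_j(x) onto the instrument basis
    Gamma = np.linalg.solve(B.T @ B, B.T @ Psi)
    Psi_hat = B @ Gamma
    # second stage: regress y on the fitted sieve terms
    beta = np.linalg.solve(Psi_hat.T @ Psi, Psi_hat.T @ y)
    return lambda x_new: np.column_stack([x_new**j for j in range(k_x)]) @ beta
```

The estimator returns a callable h_hat; on simulated data with an endogenous regressor and h0(x) = x², the fitted function tracks h0 on the interior of the support.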
We study agents who are more likely to remember some experiences than others but update beliefs as if the experiences they remember are the only ones that occurred. To understand the long-run effects of selective memory, we propose selective-memory equilibrium. We show that if the agent’s behavior converges, their limit strategy is a selective-memory equilibrium, and we provide a sufficient condition for behavior to converge. We use this equilibrium concept to explore the consequences of several well-documented biases. We also show that there is a close connection between selective-memory equilibria and the outcomes of misspecified learning.
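The updating friction in this abstract can be illustrated with a simulation: an agent who recalls successes more often than failures, but treats the remembered sample as if it were the complete record, converges to a belief that overweights successes. The recall probabilities below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
p_true = 0.5                     # true success probability
n = 200_000
outcomes = rng.random(n) < p_true

# selective memory: successes are recalled with prob 0.9, failures with prob 0.5
recall_prob = np.where(outcomes, 0.9, 0.5)
remembered = rng.random(n) < recall_prob

# naive updating: the agent treats remembered experiences as the only ones
belief = outcomes[remembered].mean()
# long-run belief: 0.9*0.5 / (0.9*0.5 + 0.5*0.5) = 9/14, not the true 0.5
```

The limit belief 9/14 ≈ 0.64 is exactly the success rate within the remembered sample, which is the kind of fixed point a selective-memory equilibrium formalizes.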
We develop a state-space model with a transition equation that takes the form of a functional vector autoregression (VAR) and stacks macroeconomic aggregates and a cross-sectional density. The measurement equation captures the error in estimating log densities from repeated cross-sectional samples. The log densities and their transition kernels are approximated by sieves, which leads to a finite-dimensional VAR for macroeconomic aggregates and sieve coefficients. With this model, we study the dynamics of technology shocks, GDP (gross domestic product), employment, and the earnings distribution. We find that spillovers between aggregate and distributional dynamics are generally small, that a positive technology shock tends to decrease inequality, and that a shock that raises earnings inequality leads to a small and insignificant GDP response.
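The sieve step, compressing an estimated log density into a small number of basis coefficients that a finite-dimensional VAR could then track alongside macro aggregates, can be illustrated as follows. The basis, grid, and sample are hypothetical, and the dynamic (transition-kernel) part of the model is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(size=100_000)         # one cross-sectional sample, e.g. earnings

# crude density estimate from the repeated cross section
edges = np.linspace(-3, 3, 61)
counts, _ = np.histogram(sample, bins=edges, density=True)
mid = 0.5 * (edges[:-1] + edges[1:])

# sieve approximation: regress the estimated log density on a low-order
# polynomial basis; the coefficients alpha are what the functional VAR
# would stack with the macroeconomic aggregates
basis = np.column_stack([mid**k for k in range(3)])   # 1, x, x^2
alpha, *_ = np.linalg.lstsq(basis, np.log(counts), rcond=None)
log_dens_hat = basis @ alpha

true_log_dens = -0.5 * mid**2 - 0.5 * np.log(2 * np.pi)
```

For a Gaussian cross section the quadratic sieve is exact, so three coefficients summarize the entire density; in the paper the measurement equation additionally accounts for the sampling error in the log-density estimate.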