We show that two important issues in empirical asset pricing—the presence of weak factors and the selection of test assets—are deeply connected. Since weak factors are those to which test assets have limited exposure, an appropriate selection of test assets can improve the strength of factors. Building on this insight, we introduce supervised principal component analysis (SPCA), a methodology that iterates supervised selection, principal-component estimation, and factor projection. It enables risk premia estimation and factor model diagnosis even when weak factors are present and not all factors are observed. We establish SPCA's asymptotic properties and showcase its empirical applications.
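The loop the abstract names lends itself to pseudocode. Below is a minimal sketch of that iteration in Python; the function name `spca`, the marginal-correlation screen, and the retained fraction `frac` are illustrative choices of ours, not the paper's exact procedure, and no guards against degenerate correlations are included.

```python
import numpy as np

def spca(R, g, k, frac=0.2):
    """Minimal SPCA sketch: iterate (1) supervised selection,
    (2) principal-component estimation, (3) factor projection."""
    R_res, g_res = R.copy(), g.copy()
    factors = []
    n_keep = max(1, int(frac * R.shape[1]))
    for _ in range(k):
        # (1) selection: keep assets whose residual returns correlate
        #     most with the residual observed factor proxy
        corr = np.array([abs(np.corrcoef(R_res[:, j], g_res)[0, 1])
                         for j in range(R.shape[1])])
        keep = np.argsort(corr)[-n_keep:]
        # (2) PCA: first principal component of the selected subpanel
        sub = R_res[:, keep] - R_res[:, keep].mean(axis=0)
        u, s, _ = np.linalg.svd(sub, full_matrices=False)
        f = u[:, 0] * s[0]
        factors.append(f)
        # (3) projection: strip the extracted factor from returns and proxy
        beta = (f @ R_res) / (f @ f)
        R_res = R_res - np.outer(f, beta)
        g_res = g_res - f * (f @ g_res) / (f @ f)
    return np.column_stack(factors)   # (T, k) estimated latent factors
```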
This article provides a general framework to study the role of production networks in international GDP comovement. We first derive an additive decomposition of bilateral GDP comovement into components capturing shock transmission and shock correlation. We quantify this decomposition in a parsimonious multi-country, multi-sector dynamic network propagation model, using data for the G7 countries over the period 1978–2007. Our main finding is that while the network transmission of shocks is quantitatively important, it accounts for a minority of observed comovement under the estimated range of structural elasticities. Contemporaneous responses to correlated shocks in the production network are more successful at generating comovement than intertemporal propagation through capital accumulation. Extensions with multiple shocks, nominal rigidities, and international financial integration leave our main result unchanged. A combination of TFP and labour supply shocks is quantitatively successful at reproducing the observed international business cycle.
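In generic notation (ours, not necessarily the paper's), the additive decomposition can be written as follows: stacking linearized GDP responses as y = Πε, with Π collecting network influence coefficients and ε the country-sector shocks, bilateral comovement splits into a transmission term and a correlated-shock term.

```latex
\[
\operatorname{Cov}(y_m, y_n)
  = \underbrace{\sum_{i} \Pi_{mi}\,\Pi_{ni}\,\operatorname{Var}(\varepsilon_i)}_{\text{transmission: one shock moves both GDPs}}
  \;+\;
  \underbrace{\sum_{i \neq j} \Pi_{mi}\,\Pi_{nj}\,\operatorname{Cov}(\varepsilon_i, \varepsilon_j)}_{\text{correlated shocks}}
\]
```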
We introduce a new class of algorithms, stochastic generalized method of moments (SGMM), for estimation and inference on (overidentified) moment restriction models. Our SGMM is a novel stochastic approximation alternative to the popular Hansen (1982) (offline) GMM, and offers fast and scalable implementation with the ability to handle streaming datasets in real time. We establish almost sure convergence and the (functional) central limit theorem for the inefficient online 2SLS and the efficient SGMM. Moreover, we propose online versions of the Durbin–Wu–Hausman and Sargan–Hansen tests that can be seamlessly integrated within the SGMM framework. Extensive Monte Carlo simulations show that as the sample size increases, the SGMM matches the standard (offline) GMM in estimation accuracy while offering gains in computational efficiency, indicating its practical value for both large-scale and online datasets. We demonstrate the efficacy of our approach with a proof of concept using two well-known empirical examples with large sample sizes.
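To fix ideas, here is a toy stochastic-approximation GMM update for a linear IV model with two instruments, streaming one observation at a time. The step-size rule, the identity weighting, and the Polyak averaging shown are illustrative choices of ours, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated overidentified linear IV data: scalar endogenous x, two instruments.
T, theta0 = 200_000, 1.5
z = rng.normal(size=(T, 2))
v = rng.normal(size=T)
x = z @ np.array([1.0, 0.5]) + v
y = x * theta0 + 0.8 * v + rng.normal(size=T)   # x endogenous via v

theta, theta_bar, G = 0.0, 0.0, np.zeros(2)
for t in range(1, T + 1):
    zt, xt, yt = z[t - 1], x[t - 1], y[t - 1]
    m = zt * (yt - xt * theta)            # moment z(y - x*theta) at current theta
    G += (zt * xt - G) / t                # running Jacobian estimate of E[z x]
    theta += (2.0 / t) * (G @ m)          # Robbins-Monro step, identity weighting
    theta_bar += (theta - theta_bar) / t  # Polyak-Ruppert average
print(f"averaged online estimate {theta_bar:.3f} (true {theta0})")
```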
As deficits rise and concerns about tax avoidance by the rich increase, we study how unrealized gains and borrowing affect Americans’ income taxes. We have four main findings. First, measuring “economic income” as currently taxed income plus new unrealized gains, the income tax base captures 60% of the economic income of the top 1% of wealth-holders (and 71% adjusting for inflation) and the vast majority of income for lower wealth groups. Second, adjusting for unrealized gains substantially lessens the degree of progressivity in the income tax, although it remains largely progressive. Third, we quantify for the first time the amount of borrowing across the full wealth distribution. Focusing on the top 1%, we find that while their total borrowing is substantial, new borrowing each year is fairly small (1–2% of economic income) compared with their new unrealized gains, suggesting that “buy, borrow, die” is not a dominant tax avoidance strategy for the rich. Fourth, consumption is less than liquid income for rich Americans, partly because the rich have a large amount of liquid income, and partly because their savings rates are high, suggesting that the main tax avoidance strategy of the super-rich is “buy, save, die.”
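Written out in our own notation (the abstract does not give formulas), the first finding compares currently taxed income to an economic-income denominator that adds new unrealized gains:

```latex
\[
y^{\text{econ}}_i = y^{\text{taxed}}_i + \Delta g^{\text{unrealized}}_i,
\qquad
\text{coverage}_{\text{top }1\%}
  = \frac{\sum_{i \in \text{top }1\%} y^{\text{taxed}}_i}
         {\sum_{i \in \text{top }1\%} y^{\text{econ}}_i}
  \approx 60\% \;\;(71\%\text{ inflation-adjusted}).
\]
```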
We introduce two data-driven procedures for optimal estimation and inference in nonparametric models using instrumental variables. The first is a data-driven choice of sieve dimension for a popular class of sieve two-stage least-squares estimators. When implemented with this choice, estimators of both the structural function h0 and its derivatives (such as elasticities) converge at the fastest possible (i.e. minimax) rates in sup-norm. The second is for constructing uniform confidence bands (UCBs) for h0 and its derivatives. Our UCBs guarantee coverage over a generic class of data-generating processes and contract at the minimax rate, possibly up to a logarithmic factor. As such, our UCBs are asymptotically more efficient than UCBs based on the usual approach of undersmoothing. As an application, we estimate the elasticity of the intensive margin of firm exports in a monopolistic competition model of international trade. Simulations illustrate the good performance of our procedures in empirically calibrated designs. Our results provide evidence against common parameterizations of the distribution of unobserved firm heterogeneity.
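As a concrete reference point, here is a minimal sieve two-stage least-squares estimator for the NPIV model y = h0(x) + e with E[e | w] = 0. The polynomial bases and the fixed sieve dimension J are illustrative simplifications on our part; the article's first contribution is precisely a data-driven choice of J.

```python
import numpy as np

def sieve_2sls(y, x, w, J, K=None):
    """Sieve 2SLS sketch for y = h0(x) + e, E[e|w] = 0.
    Returns a callable estimate of h0. Polynomial bases and a fixed
    sieve dimension J are illustrative; the paper chooses J from data."""
    K = K or 2 * J                              # instrument sieve at least as rich
    Psi = np.vander(x, J, increasing=True)      # basis for h0(x)
    B = np.vander(w, K, increasing=True)        # basis for the instrument space
    P = B @ np.linalg.pinv(B.T @ B) @ B.T       # projection onto span(B)
    c = np.linalg.pinv(Psi.T @ P @ Psi) @ (Psi.T @ P @ y)
    return lambda x0: np.vander(x0, J, increasing=True) @ c

# illustrative use on simulated data with endogeneity
rng = np.random.default_rng(0)
n = 2_000
w = rng.uniform(-1, 1, n)
u = rng.normal(size=n)
x = 0.8 * w + 0.3 * u                           # endogenous regressor
y = np.sin(np.pi * x) + 0.5 * u                 # structural error correlated with x
h_hat = sieve_2sls(y, x, w, J=4)
```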
We study agents who are more likely to remember some experiences than others but update beliefs as if the experiences they remember are the only ones that occurred. To understand the long-run effects of selective memory, we propose selective-memory equilibrium. We show that if the agent’s behavior converges, their limit strategy is a selective-memory equilibrium, and we provide a sufficient condition for behavior to converge. We use this equilibrium concept to explore the consequences of several well-documented biases. We also show that there is a close connection between selective-memory equilibria and the outcomes of misspecified learning.
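A toy example (ours, not the paper's model, and without the equilibrium feedback) shows how the naive-updating mechanism distorts limit beliefs: if successes are always remembered but failures only with probability rho, the agent's long-run estimate of a Bernoulli parameter p converges to p / (p + rho(1 - p)) rather than p.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy selective-memory illustration: Bernoulli(p) outcomes, successes
# always remembered, failures remembered with probability rho; the agent
# estimates p from remembered draws as if memory were complete.
p, rho, T = 0.5, 0.5, 500_000
outcomes = rng.random(T) < p
remembered = outcomes | (rng.random(T) < rho)
p_hat = outcomes[remembered].mean()
print(f"limit belief {p_hat:.3f}; theory {p / (p + rho * (1 - p)):.3f}; truth {p}")
```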
We develop a state-space model with a transition equation that takes the form of a functional vector autoregression (VAR) and stacks macroeconomic aggregates and a cross-sectional density. The measurement equation captures the error in estimating log densities from repeated cross-sectional samples. The log densities and their transition kernels are approximated by sieves, which leads to a finite-dimensional VAR for macroeconomic aggregates and sieve coefficients. With this model, we study the dynamics of technology shocks, GDP (gross domestic product), employment, and the earnings distribution. We find that spillovers between aggregate and distributional dynamics are generally small, that a positive technology shock tends to decrease inequality, and that a shock that raises earnings inequality leads to a small and insignificant GDP response.
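The finite-dimensional reduction can be sketched in a few lines; the histogram-based log-density estimate and the polynomial sieve below are our own simplifications (the paper's measurement equation models the sampling error explicitly), but the stacking-then-VAR structure is the one the abstract describes.

```python
import numpy as np

def stacked_var(macro, samples, K=6, bins=40):
    """Sketch: per period, estimate the cross-sectional log density,
    project it on K sieve terms, stack the coefficients with the macro
    aggregates, and fit a VAR(1) on the stacked state by OLS.
    macro   : (T, m) array of macro aggregates
    samples : list of T cross-sectional samples (e.g., earnings draws)"""
    states = []
    for t, s in enumerate(samples):
        hist, edges = np.histogram(s, bins=bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        z = (centers - centers.mean()) / centers.std()
        Phi = np.vander(z, K, increasing=True)          # polynomial sieve
        alpha, *_ = np.linalg.lstsq(Phi, np.log(hist + 1e-8), rcond=None)
        states.append(np.concatenate([macro[t], alpha]))
    X = np.asarray(states)
    A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)  # X_{t+1} ~ X_t @ A
    return X, A.T                                       # state path, transition
```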
We consider a broad class of spatial models where there are many types of interactions across a large number of locations. We provide a new theorem that offers an iterative algorithm for calculating an equilibrium and sufficient and "globally necessary" conditions under which the equilibrium is unique. We show how this theorem enables the characterization of equilibrium properties for one important spatial system: an urban model with spillovers across a large number of different types of agents. An online appendix provides 12 additional examples of both spatial and nonspatial economic frameworks for which our theorem provides new equilibrium characterizations.
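The iterative algorithm is a damped fixed-point iteration. Here is a sketch for a generic member of the covered class, x_h = K_h x^Γ, with illustrative kernels and elasticities of our choosing; the paper ties uniqueness to conditions on the elasticity matrix, which this sketch does not verify.

```python
import numpy as np

def solve_spatial_system(K, Gamma, damp=0.5, tol=1e-10, max_iter=100_000):
    """Damped fixed-point iteration for the system
    x_h = K_h @ prod_{h'} x_{h'}**Gamma[h, h']  (elementwise powers),
    one equation per interaction type h across N locations."""
    H, N, _ = K.shape
    x = np.ones((H, N))
    for _ in range(max_iter):
        rhs = np.stack([K[h] @ np.prod(x ** Gamma[h][:, None], axis=0)
                        for h in range(H)])
        x_new = x ** (1 - damp) * rhs ** damp   # geometric (log-space) damping
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence; uniqueness condition may fail")

# illustrative use: two interaction types, five locations
rng = np.random.default_rng(2)
K = rng.random((2, 5, 5)) + 0.1
Gamma = np.array([[0.3, -0.2], [0.1, 0.25]])
x_star = solve_spatial_system(K, Gamma)
```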
We use a large cross section of equity returns to estimate a rich affine model of equity prices, dividends, returns, and their dynamics. Our model prices dividend strips of the market and equity portfolios without using strips data in the estimation. Yet model-implied equity yields closely match yields on traded strips. Our model extends equity term-structure data over time (back to the 1970s) and across maturities, and generates term structures for various equity portfolios. The novel cross section of term structures from our model covers 45 years and includes several recessions, providing a new set of empirical moments to discipline asset pricing models.
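The object matched to traded strips is the model-implied equity yield; in generic affine notation (ours, not necessarily the paper's), with state vector X_t, the price-dividend ratio of an n-period dividend strip is exponential-affine and the yield follows directly:

```latex
\[
\frac{P^{(n)}_t}{D_t} = \exp\!\big(A_n + B_n^{\top} X_t\big),
\qquad
e^{(n)}_t \equiv -\frac{1}{n}\,\log\frac{P^{(n)}_t}{D_t}
         = -\frac{A_n + B_n^{\top} X_t}{n},
\]
% where the scalar A_n and the vector B_n solve the model's pricing recursions.
```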
In the introduction to Activity Analysis of Production and Allocation (Cowles Monograph No. 13), Tjalling C. Koopmans recalled that he developed the model of his “Optimal Utilization of the Transportation System” (in the proceedings of the 1947 International Statistical Congress, reissued as an Econometrica supplement in 1949) “under the stimulation of statistical work for the Combined Shipping Adjustment Board, the British-American board dealing with merchant shipping problems during the second world war.” Similarly, the contributions of George B. Dantzig and Marshall K. Wood to Cowles Monograph No. 13 (two revised journal articles and five new chapters) emerged from wartime work for the US Army Air Forces and postwar work for the Department of the Air Force. This article examines the context and consequences of the wartime roots of these foundational contributions to activity analysis and linear programming, with particular attention to Koopmans's 1942 memorandum for the Combined Shipping Adjustment Board titled “Exchange Ratios between Cargoes on Various Routes” (first published in his Scientific Papers, 1970).