“Generalized Method of Moments”: A Brief History

In March 1979, Lars Peter Hansen, a young Ph.D. fresh out of the University of Minnesota, submitted a paper to the prestigious Econometrica. It described a statistical methodology that, in its final form, would allow economists to draw strong conclusions from models that weren’t completely specified (that is, not all variables, relationships or assumptions were included or precisely defined).

This “generalized method of moments” would give econometricians the ability to appraise alternative theories and investigate important economic phenomena without fully developing each of their elements. Researchers could rely on the most powerful explanatory variables and dispense with unnecessary assumptions. “GMM allows you to ‘do something without having to do everything simultaneously,’” Hansen explains.
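In its now-standard textbook form (a sketch in generic notation, not Hansen's original presentation), the idea is that a model need only supply a set of moment conditions, restrictions of the form

\[
\mathbb{E}\big[\, g(x_t, \theta_0) \,\big] = 0,
\]

where \(x_t\) is observed data, \(\theta_0\) is the true parameter vector, and \(g\) collects the model's testable implications. The GMM estimator replaces the expectation with a sample average and minimizes a weighted quadratic form in the averaged moments,

\[
\hat{\theta} = \arg\min_{\theta}\; \bar{g}_T(\theta)'\, W\, \bar{g}_T(\theta),
\qquad
\bar{g}_T(\theta) = \frac{1}{T}\sum_{t=1}^{T} g(x_t, \theta),
\]

for some positive-definite weighting matrix \(W\). Because only the moment conditions enter, the rest of the data-generating process can be left unspecified.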

But GMM, abstract and mathematically challenging, was not immediately embraced by the field. (Indeed, Hansen's initial draft was rejected by Econometrica, spurring him to refine and generalize his argument.) Hansen and his colleagues persevered, demonstrating the methodology's power and range by applying it to exchange rates, asset pricing models and rational expectations theory. These and other examples gradually convinced economists of its utility and, with time, GMM became the gold standard for estimating economic models. In 2013, Hansen received the Nobel Prize in economic sciences for the methodology, cited in particular for its ability to evaluate asset pricing models.

Hansen continues to study asset prices, focusing on the linkages between financial markets and the broader macroeconomy. Recent work looks at uncertainty and risk tolerance in asset pricing behavior. He has also developed methods to analyze and account for the uncertainty of the households and businesses that populate economic models, as well as the uncertainty econometricians themselves have about the adequacy of those models. Related research examines policymaking under uncertainty.