DRI-250 for week of 1-27-13: What Are the Lessons of Econometrics?

An Access Advertising EconBrief:

What Are the Lessons of Econometrics?

Recently, Federal Reserve official Janet Yellen attracted attention with a speech in which she justified monetary easing by citing the Fed’s use of a new “macroeconometric model” of the economy. The weight of the term seemed to give it rhetorical heft, as if the combination of macroeconomics and econometrics produced a synergy that each lacked individually. Does econometrics hold the key to the mysteries of optimal macroeconomic policy? If so, why are we only now finding that out? And, more broadly, is economics really the quantitative science it often pretends to be?

Econometrics

As practiced for roughly eight decades, econometrics combines the knowledge of three fields – economics, mathematics and statistics. Economics supplies the pure logic of human choice that gives structure to our quantitative investigations into human behavior. Mathematics determines the form in which economic principles are expressed for purposes of statistical analysis. Statistics allows for the systematic processing and analysis of sample data organized into meaningful form using the principles of economics and mathematics.

Suppose we decide to study the market for production and consumption of corn in the U.S. Economics tells us that the principles of supply and demand govern production and consumption. It further tells us that the price of corn will gravitate toward the point at which the aggregate amount of corn that U.S. farmers wish to produce will equal the aggregate amount that U.S. consumers wish to consume and store for future use.

Mathematics advises us to portray this relationship between supply and demand by expressing both as mathematical equations. That is, both supply and demand will be expressed as mathematical functions of relevant variables. The orthodox formulation treats the price of corn as the independent variable and the quantity of corn supplied and demanded, respectively, as the dependent variable of each equation. Other influences – consumer income on the demand side, say, or weather on the supply side – enter the equations as additional variables, each with its own parameter (coefficient) that isolates its effect on quantity from the effect of price. Finally, our model of the corn market will stipulate that the two equations produce an equal quantity demanded and supplied of corn – the equilibrium condition.
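To make this concrete, here is a minimal sketch of such a model (the linear form and the particular shift variables are our illustrative assumptions, not a claim about any actual corn study):

$$Q^d = \alpha_0 - \alpha_1 P + \alpha_2 Y, \qquad Q^s = \beta_0 + \beta_1 P + \beta_2 W, \qquad Q^d = Q^s$$

Here P is the price of corn, Y is consumer income, W is a supply shifter such as growing-season rainfall, and the Greek letters are the parameters whose numerical values econometrics tries to estimate.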

Statistics allows us to gather data on corn without having to compile every single scrap of information on every ear of corn produced during a particular year. Instead, sample data (probably provided by government bureaus) can be consulted and carefully processed using the principles of statistical inference.

In principle, this technique can derive equations for both the supply of corn and its demand. These equations can be used either to predict future corn harvests or to explain the behavior of corn markets in the past. For over a half-century, training in econometrics has been a mandatory part of postgraduate education in economics at nearly all American universities.
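As a hedged illustration of what that estimation step looks like in practice, here is a toy sketch in Python using simulated data and ordinary least squares. Everything in it – the parameter values, the shift variables, the sample size – is invented for illustration; a serious study would also have to confront the simultaneity of price and quantity, typically with instrumental-variables methods:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical sample of 200 market observations

# Invented exogenous shifters: income moves demand, rainfall moves supply.
income = rng.normal(100, 10, n)
rain = rng.normal(30, 5, n)

# "True" structural equations (known only because we made them up):
#   demand: Qd = 50 - 2.0*P + 0.5*income
#   supply: Qs = -10 + 3.0*P + 1.0*rain
# Market clearing (Qd = Qs) determines price in each observation.
price = (60 + 0.5 * income - 1.0 * rain) / 5.0 + rng.normal(0, 1, n)
quantity = 50 - 2.0 * price + 0.5 * income + rng.normal(0, 1, n)

# The textbook first pass: regress quantity on price and income by OLS.
X = np.column_stack([np.ones(n), price, income])
coef, *_ = np.linalg.lstsq(X, quantity, rcond=None)
print("estimated demand coefficients (constant, price, income):", coef)
# With luck the estimates land near the invented "true" values (50, -2.0, 0.5);
# with real data, no such truth is available to check against.
```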

Does this procedure leave you scratching your head? In particular, are you moved to wonder why mathematics and simultaneous equations should intrude into the study of economics? Or have we outlined a beautiful case of interdisciplinary cooperation in science?

Historical Evolution

As it happens, the development of econometrics was partly owing to the convergence of scientific research programs that evolved concurrently in similar directions. Economics has interacted with data virtually since its inception. In the 1600s, Sir William Petty utilized highly primitive forms of quantitative analysis in England to analyze subjects like taxation and trade. Adam Smith populated The Wealth of Nations with various homely numerical examples. In the early 19th century, the French economist Antoine Augustin Cournot used mathematics to develop pathbreaking models of monopoly and oligopoly, which anticipated more famous work done many decades later.

The French-born economist Leon Walras, working in Lausanne, Switzerland, and an Italian, Enrico Barone, applied algebraic mathematics to economics by expressing economic relationships in the form of systems of simultaneous equations. They did not attempt to fill in the parametric coefficients of their economic variables with real numbers – in fact, they explicitly denied the possibility of doing so. Their intent was purely symbolic. In effect, they were saying: “Isn’t it remarkable how the relationships in an economic system resemble those in a mathematical system of simultaneous equations? Let’s pretend that an economy of people could be described and analyzed using algebraic mathematics as a tool – and then see what happens.”

At almost the same time (the early 1870s), the British economist William Stanley Jevons developed the principles of marginalism, which have been the cornerstone of economic logic ever since. Economic value is determined at the margin – which means that both producers and consumers gauge the effects of incremental changes in action. If the benefits of the action exceed the costs, they approve the action and take it. If the reverse holds, they spurn it. Their actions produce tendencies toward marginal equality of benefits and costs, similar in principle to the quantity supplied/quantity demanded equality cited above. Jevons thought it amazing that this incremental logic seemed to correspond so closely to the logic inherent in the differential calculus. So he developed his theory of consumer demand in mathematical terms, using calculus. (It is also fascinating that the Austrian simultaneous co-discoverer of marginalism, Carl Menger, refused to employ calculus in his formulations.)

By the early 1900s, mathematics had taken root in economics. Soon a British statistician, Ronald Fisher, would modernize the science of statistics. It was only a matter of time until mathematical economists began using statistics to put numbers into the coefficient slots in their equations – slots previously occupied by algebraic letters serving as symbolic place-holders.

In 1932, economist and businessman Alfred Cowles endowed the Cowles Commission for Research in Economics; founded in Colorado Springs, it moved to the University of Chicago in 1939. The purpose of the Commission was to do economic research, but the research was targeted toward the application of mathematics to economics. The original motto of the Commission was the same as that of the Econometric Society. It was taken from the words of the great physicist Lord Kelvin: “Science is measurement.”

Seldom have three words conveyed so much meaning. The implication was that economics was, or should strive to be, a “science” in exactly the same sense as physics, biology, chemistry and the rest of the hard physical sciences. The physical sciences did science by observing empirical regularities and expressing them mathematically. They tested their theories using controlled laboratory experiments. They were brilliantly successful. The progress of mankind can be traced by following the progression of their work.

In retrospect, it was probably inevitable that social sciences like economics should take this turn – that they should come to define their success, their very meaning, by the extent and degree of their emulation of the natural sciences. The Cowles Commission was the cutting edge of econometrics for the next 20 years, after which time its major focus shifted from empirical to theoretical economics – back to mathematical models of the economy using simultaneous equations. But by that time, econometrics had gained an impregnable beachhead in economics.

The Role of Econometrics

Great hopes were held out for econometrics. Of course, it was young as sciences go, but by careful study and endless trial and error, we would gradually get better at building economic models, choosing just the right mathematical forms and using exactly the right statistical techniques. Our forecasts would slowly, but surely, improve.

After all, we had a country full of universities whose economists had nothing better to do than monkey around with econometrics. They would submit their findings for review by their peers. The review would lead to revisions. The best studies would be published in the leading economics journals. At last, at long last, we would discover the empirical regularities of economics, the rules and truths that had remained hidden from us for centuries. The entire system of university tenure and promotion would be based on this process, leading to the notorious maxim “publish or perish.” Success would be tied to the value of government research grants acquired to do this research. The brightest young minds would succeed and climb the ladder of university success. They would teach in graduate school. A virtuous cycle of success would produce more learning, better economics, better econometrics, better models, better predictions, more economic prosperity in society, better education for undergraduates and graduate students alike and a better life for all.

As it turned out, none of these hopes have been fulfilled.

Well, that’s not entirely accurate. A system was created that has ruled academic life for decades and, incredibly, shows no sign of slowing down. Young economists are taught econometrics, after a fashion. They dutifully graduate and scurry to a university where they begin the race for tenure. Like workers in a sausage factory, they churn out empirical output that is read by nobody except a few of their colleagues. The output then dies an unlamented death in the graveyard of academic journals. The academic system has benefited from econometrics and continues to do so. It is difficult to imagine this system flourishing in its absence.

Meanwhile, back at the ranch of reality, the usefulness of econometrics to the rest of the world asymptotically approaches zero. Periodically, well-known economists like Edmond Malinvaud and Carl Christ review the history of econometrics and the Cowles Commission. They are laudatory. They praise the Commission’s work and the output of econometricians. But they say nothing about empirical regularities uncovered or benefits to society at large. Instead, they gauge the benefits of econometrics entirely from the volume of studies done and published in professional journals and the effort expended by generations of economists. In so doing, they violate the very standards of their profession, which dictate that the value of output is judged by its consumers, not by its producers, and that value is determined by price in a marketplace rather than by weight on a figurative scale.

It is considered a truism within the economics profession that no theoretical dispute was ever settled by econometrics – that is a reflection of how little trust economists place in it behind closed doors. In practice, economists put their trust in theory and choose their theories on the basis of their political leanings and emotional predilections.

We now know, as surely as we can know anything in life, that we cannot predict the future using econometrics. As Donald (now Deirdre) McCloskey once put it, you can figure this out yourself without even going to graduate school. All you have to do is figuratively ask an econometrician the “American question”: “If you’re so smart, why ain’t you rich?” Accurate predictions would yield untold riches to the predictors, so the absence of great wealth is the surest index of the poverty of econometrics.

Decades of econometric research have yielded no empirical regularities in economics. Not one. No equivalent to Einstein’s equation for energy or the Law of Falling Bodies.

It is true that economists working for private business sometimes generate predictions about individual markets using what appears to be econometrics. But this is deceptive. The best predictions are usually obtained by techniques called “data mining” that violate the basic precepts of econometrics. The economists are not interested in doing good econometrics or statistics – just in getting a prediction with some semblance of accuracy. Throwing every scrap of data they can get their hands on into the statistical pot and cooking up a predictive result doesn’t tell you much about which variables are the most important or the degree of independent influence each has on the outcome. But the only hope for predictive success may be in assuming that the future is an approximation of the past, in which case the stew pot may cook up a palatable result.
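A toy simulation (entirely invented, implying nothing about any real forecaster’s data) shows why the stew-pot approach flatters itself: with enough regressors, even pure noise can be “explained” in-sample, while the fit on fresh data collapses.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 60, 40  # few observations, many candidate regressors

# Hypothetical setup: the outcome truly depends on NONE of the regressors.
X = rng.normal(size=(n, k))
y = rng.normal(size=n)

# Kitchen-sink regression: throw every variable into the pot.
design = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
fitted = design @ coef
r2_in = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)

# Fresh draws from the same (unrelated) process stand in for "the future".
X_new = rng.normal(size=(n, k))
y_new = rng.normal(size=n)
pred = np.column_stack([np.ones(n), X_new]) @ coef
r2_out = 1 - np.sum((y_new - pred) ** 2) / np.sum((y_new - y_new.mean()) ** 2)

print(f"in-sample R^2: {r2_in:.2f}   out-of-sample R^2: {r2_out:.2f}")
# Typically a flattering in-sample fit and a near-zero or negative
# out-of-sample fit -- the model "explained" noise, not structure.
```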

The Great “Statistical Significance” Scandal

In the science of medicine, doctors are sworn to obey the dictum of Hippocrates: “First, do no harm.” For over twenty years, economists Deirdre McCloskey and Stephen Ziliak have preached this lesson to their colleagues in the social sciences. The use of tests of “statistical significance” as a criterion of value was rampant by the 1980s, when the two began their crusade against its misuse. For, as they pointed out, the term is misunderstood not only by the general public but even by the professionals who employ it.

When a variable is found statistically significant, this does not constitute an endorsement of its quantitative importance. It merely indicates that the estimated effect is unlikely to be a fluke of sampling – that a coefficient this far from zero would rarely turn up by chance in a properly drawn random sample if the true effect were nil. That information is certainly useful. But it is not the summum bonum of econometrics. What we usually want to know is what McCloskey and Ziliak refer to as the “oomph” of a variable (or a model in its totality) – how much quantitative effect it has on the thing it affects.
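A small simulation makes the distinction vivid (all numbers invented for illustration): with a large enough sample, a quantitatively negligible effect will still clear the significance bar.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 1_000_000  # a huge sample makes even trivial effects "significant"

x = rng.normal(size=n)
# Invented truth: x nudges y by a quantitatively trivial 0.005 units.
y = 0.005 * x + rng.normal(size=n)

slope, intercept, r, p, se = stats.linregress(x, y)
print(f"slope = {slope:.4f}, p-value = {p:.2g}, R^2 = {r**2:.6f}")
# The p-value falls far below 0.05 ("statistically significant"), yet the
# effect -- the "oomph" -- explains almost none of the variation in y.
```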

These two modern-day Diogenes figures conducted two studies of the econometric articles published in the American Economic Review, the leading professional journal. In the 1980s, most of the authors erred in their use and interpretation of the concept of statistical significance. In the 1990s, after McCloskey and Ziliak began writing and speaking out on the problem, the proportion of mistakes actually increased. Among the culprits were some of the profession’s most distinguished names, including several Nobel Prize winners. When it comes to statistics and econometrics, it seems, economists literally do not know what they are doing.

According to McCloskey – who is herself a practitioner and believer in econometrics – virtually all the empirical work done in econometrics to date will have to be redone. Most of the vast storehouse of econometric work done since the 1930s is worthless.

The Difference Between the Social Sciences and the Natural Sciences

Statistics has been proven to work well in certain contexts. The classical theory of relative-frequency probability is clearly valid, for example; if it weren’t, Las Vegas would have gone out of business long ago. Those who apply statistics properly, like W. Edwards Deming, have used it with tremendous success in practical applications. Deming’s legendary methods of quality control involving sampling and testing have been validated again and again across industries and cultures.
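The casino’s edge rests on precisely this kind of convergence. A tiny simulation (ours, purely illustrative) shows the relative frequency of heads in fair-coin flips settling toward the underlying probability of one-half as trials mount:

```python
import numpy as np

rng = np.random.default_rng(3)

# Relative-frequency probability in action: as the number of fair-coin
# flips grows, the observed share of heads converges on 0.5.
for n in (100, 10_000, 1_000_000):
    flips = rng.integers(0, 2, size=n)
    print(f"{n:>9,} flips: share of heads = {flips.mean():.4f}")
```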

When econometrics was born, a small band of critics protested its use on the grounds that the phenomena being studied in the social sciences were not amenable to statistical inference. They do not involve replicative, repetitive events that resemble coin flips or dice throws. Instead, they are unique events involving different elements whose structures differ in innumerable ways. The number of variables involved usually differs between the physical and social sciences, being vastly larger when human beings are the phenomena under study. Moreover, the free will exerted by humans is different from unmotivated, instinctive, chemically or environmentally induced behavior found in nature. Free will can defy quantitative expression, whereas instinctive behavior may be much more tractable.

In retrospect, it now seems certain that those critics were right. Whatever the explanation, the social sciences in general and economics in particular resist the quantitative measurement techniques that took the natural sciences to such heights.

The Nature of Valid Economic Prediction

We can draw certain quantitative conclusions on the basis of economic theory. The Law of Demand says that when the price of something rises, desired purchases of that thing will fall – other things equal. But it doesn’t say how much they’ll fall. And we know intuitively that, in real life, other things are never unchanged. Yet despite this severely limited quantitative content, there is no proposition in economic theory that has demonstrated more practical value.

Economists have long known that agriculture is destined to claim a smaller and smaller share of total national income as a nation gets wealthier. There is no way to predict the precise pattern of decrease, but we know that it will happen. Why? Agricultural goods are mostly either food or fiber. We realize instinctively that when our real incomes increase, we will purchase more food and more clothing – but not in proportion to the increase in income. That is, a 20% increase in real income will not motivate us to eat 20% more food – not even Diamond Jim Brady was that gluttonous. Similarly, increases in agricultural productivity will increase output and lower price over time. But a 20% decline in food prices will not call forth 20% more desired food purchases. Economists say that the demand for agricultural goods is price- and income-inelastic.
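In the language of elasticities, a one-line worked example (the 0.3 figure is purely illustrative, not an estimate):

$$\eta_Y = \frac{\%\Delta Q}{\%\Delta Y} = 0.3 \quad\Longrightarrow\quad \%\Delta Q = 0.3 \times 20\% = 6\%$$

If the income elasticity of food demand were 0.3, a 20% rise in real income would raise desired food purchases by only about 6% – exactly the kind of “less than proportional” statement economics can responsibly make.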

These are the types of quantitative predictions economists can make with a clear conscience. They are couched in terms of “more” or “less,” not in terms of precise numerical predictions. They are what Nobel laureate F. A. Hayek called “pattern predictions.”

It is one of history’s great ironies that Hayek, an unrelenting critic of macroeconomics and foe of statistics and econometrics, nevertheless made some of the most prescient economic predictions of the 20th century. In 1929, Hayek predicted that the economic boom of the 1920s would soon end in economic contraction – which it did, with a vengeance. (Hayek’s mentor, Ludwig von Mises, went even further by refusing a prestigious appointment because he anticipated that “a great crash” was imminent.) In the 1930s, both Hayek and von Mises predicted the failure of the Soviet economy due to its lack of a functioning price system, particularly the absence of meaningful interest rates. That prediction, too, eventually bore fruit. In the 1950s, Hayek declared that Keynesian economic policies would produce accelerating inflation. Western industrial nations endured withering bouts of inflation beginning in the late 1960s and lasting for over a decade. Then Hayek broke with his fellow economists by insisting that this inflationary cycle could be broken, but only by drastically slowing the rate of monetary growth and enduring the resulting recession for as long as it lasted. Right again – and the recession was followed by two decades of prosperity that came to be known as the Great Moderation.

Ask the Fed

One of the tipoffs to the complicity of the mainstream press in the Obama administration’s policies is the fact that nobody has thought to ask Janet Yellen questions like this: “If your macroeconometric model is good enough for you to rely on it as a basis for a highly unconventional set of policies, why did it not predict the decline in Gross Domestic Product in the fourth quarter of 2012? Or if it did, why did the Fed keep that news a secret from the public?”

The press doesn’t ask those questions. Perhaps they are cowed by the subject of “macroeconometrics.” In fact, macroeconomics and econometrics are the two biggest failures of contemporary economics. And there are those who would substitute the word “frauds” for “failures.” Unless you take the position that combining two failures somehow produces a success, there is no reason to expect anything valuable from macroeconometrics.
