DRI-231 for week of 1-19-14: How to Save Thousands of Medical-Patient Lives Annually and Save Money, Too

An Access Advertising EconBrief:

How to Save Thousands of Medical-Patient Lives Annually and Save Money, Too

The best-kept secret about economics is that its most famous practitioners are its least proficient. Most of what the general public knows about economics falls under the rubric of macroeconomics. That is another way of saying it is wrong, or at least wrong-headed. Meanwhile, microeconomics is the well-developed, solidly upholstered sibling of macroeconomics. It contains the body of general economic theory. It was born at least a couple centuries before macroeconomics had its modern unveiling. Today microeconomics lives a cloistered life behind ivy-covered walls, seduced from its cozy residence inside textbooks by the lectures of a select priesthood. Almost all the good done by economics in the world comes from microeconomics.

Microeconomics is often called price theory because its centerpiece is the theory of supply and demand that governs price determination. Prices are the common denominators that permit the expression of human valuation. And the refinement of a coherent, rational valuation process is what has lifted humanity out of the mire of poverty and stagnation and into the light of prosperity.

The last five years have made the poverty of macroeconomics painfully apparent. A recent op-ed in The Wall Street Journal demonstrates the potential value of price theory to save thousands of patients stranded on waiting lists for donated human organs for transplant. The op-ed is an ice-cold bath in the wellsprings of human reason.

Price Theory and Organ Donation
As with most countries in the world, the United States forbids the commercial sale and purchase of human organs. The “market” for organs, such as it is, is limited to highly regulated operations by non-profit firms that serve as clearinghouses for organs donated by private individuals under official auspices. In effect, both suppliers of organs and demanders face a market price of zero, which encourages consumers to maximize the amount they demand and suppliers to minimize the number of organs they supply.  This is a recipe for a shortage of human organs, and that’s just what we have.

At a market price of zero, the quantity of a transplantable organ that potential recipients want to acquire will exceed the quantity of that same type of organ that potential donors will wish to supply to the market. The difference – the excess of quantity demanded over quantity supplied – constitutes the marketplace “shortage” of that organ.

It is impossible to overstress the importance of this chronic shortage. It is the key to the predicament of participants in this market. When the amount of any good available for sale in the market falls short of the amount that buyers wish to purchase, some means must be found to ration the existing quantity supplied among the overabundance of buyers. In the market for transplanted organs, the rationing mechanism is placement upon the list of recipients awaiting a replacement organ. Each list member must await his or her turn. Economists call this rationing by queue.

In a garden-variety market, price rations the quantity sold. The equilibrating unit – the one that equalizes the ex ante amount that buyers wish to buy and sellers wish to sell – is valued equally by both buyers and sellers; that is why the equilibrium outcome is stable and could theoretically persist indefinitely. If the price were higher, quantity supplied would exceed quantity demanded and sellers would have unsold product. If the price were lower, quantity demanded would exceed quantity supplied and some buyers would be frustrated. Failure to produce up to the equilibrium point would strand production at a point where buyers value an additional unit of output more than producers value the resources necessary to produce that unit, while pushing production beyond the equilibrium point would require the expenditure of more money on the marginal unit produced than the value placed on that unit by consumers.
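The mechanics described above can be sketched with a toy linear model. The numbers below are purely illustrative, chosen only to make the algebra visible; they are not estimates for the organ market or any real market:

```python
# Toy linear supply-and-demand model (illustrative numbers only).
def quantity_demanded(p, a=100.0, b=2.0):
    """Qd = a - b*p: buyers want less as the price rises."""
    return a - b * p

def quantity_supplied(p, c=10.0, d=3.0):
    """Qs = c + d*p: sellers offer more as the price rises."""
    return c + d * p

# Equilibrium: a - b*p = c + d*p  =>  p* = (a - c) / (b + d)
p_star = (100.0 - 10.0) / (2.0 + 3.0)   # 18.0
q_star = quantity_demanded(p_star)       # 64.0 -- equals quantity_supplied(p_star)

# At a legally imposed price of zero, quantity demanded far exceeds
# quantity supplied; the gap is the chronic shortage rationed by queue.
shortage_at_zero = quantity_demanded(0) - quantity_supplied(0)  # 90.0
```

At the equilibrium price the two quantities coincide and the market clears; at a mandated price of zero the model reproduces the chronic shortage that must then be rationed by some non-price mechanism.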

Under the shortage caused by the artificial zero-price environment, the human-organ market is highly unstable. This is a technical way of saying that the unhappiness of people on both sides of the market is a continual source of friction and momentum toward change. Economists have compiled a stylized list of the sources of unhappiness associated with below-equilibrium prices, called maximum prices. This list has its mirror image for prices held above equilibrium, called minimum prices.

Shortages: When prices are not allowed to equilibrate markets, shortages become chronic. This accumulation of frustration among buyers can cause anything from mild dissatisfaction to riots in the streets, depending on the good or service in shortage. In the former Soviet Union, prices of most consumer goods were set administratively well below their equilibrium levels. Ordinary citizens waited in line for hours each day – or hired people to wait for them – for a chance at these goods.

Black markets: When a legal maximum price causes the shortage, the key recipe ingredient for a black market is present. Buyers want to pay more than the legal maximum for additional units of the good; in our case, they want it very badly. By definition, criminals are people willing to violate the law for personal gain. So we should expect a black market to arise under these circumstances. (Despite this, authorities invariably profess shock and disappointment when it happens.) In wartime, price controls imposed by governments reliably cause the emergence of black markets for consumer goods.

Discrimination by producers against consumers: There are more buyers than there are units of output available to be bought, so producers can typically pick and choose the recipients of that output. If producers have a “taste for discrimination” against certain consumers, they can indulge it without suffering the loss of sales revenue felt by a producer in a competitive market. Because the cost of discrimination is reduced, we should expect to find more of it under these circumstances – and we do. In the U.S., prices and wages set administratively by governments, public utilities and unions historically allowed these bodies to discriminate against blacks.

Queuing costs: Since a queue substitutes for a higher price, we would expect it to impose real costs on consumers. The costs come in the form of waiting to consume rather than consuming immediately. In the human-organ market, waiting means human suffering and death.

Quality deterioration: If a higher price is legally foreclosed, a possible alternative option for providing additional output is a reduction in the quality of the output. Quality deterioration is an indirect form of price increase that, while quite possibly inferior in the buyer’s eyes, may still be preferable to not obtaining the output at all. In the market for donated human kidneys, this takes the form of utilizing an inferior substitute. Kidney dialysis is a cumbersome, painful, costly and time-consuming process that performs the blood-cleansing function of failing kidneys for a limited time. This is the normal option selected by would-be transplant recipients while waiting for a donor kidney.

An Application of Price Theory: Creating a Legal Market for Human Kidneys
Nobel laureate economist Gary S. Becker and colleague Julio J. Elias make the case for creating a legal market for human organs in their Wall Street Journal op-ed, “Cash for Kidneys” (1/18-19/2014).

By far the most commonly transplanted human organ is the kidney. In 2012, the waiting list for donated kidneys held 95,000 people, but only 16,500 kidney transplants were actually performed. In economic terms, this represents a chronic shortage of nearly 80,000 kidneys. For those people who finally succeed in obtaining a kidney via the queue, the average wait is roughly 4.5 years. The situation is not only dire but worsening. In 2002, the waiting list contained 54,000 people and the average wait was 2.9 years.
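The arithmetic behind these figures is straightforward; every number below comes from the op-ed as cited above:

```python
# 2012 U.S. kidney figures cited by Becker and Elias.
waiting_list_2012 = 95_000   # people on the waiting list
transplants_2012 = 16_500    # transplants actually performed

# The chronic shortage: demand unmet by the year's supply.
shortage_2012 = waiting_list_2012 - transplants_2012   # 78,500 -- "nearly 80,000"

# The queue worsened over the preceding decade.
waiting_list_2002 = 54_000
wait_years_2002, wait_years_2012 = 2.9, 4.5
list_growth = waiting_list_2012 / waiting_list_2002    # roughly 1.76x in ten years
```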

In 2012, about 4,500 people died while waiting for a donated kidney. Messrs. Becker and Elias propose changing the status quo to virtually eliminate the waiting list and the annual death toll. The means to that end is the creation of a legal, functioning market in which human kidneys are bought and sold at a market price. That would tend to equalize the number of kidneys demanded and supplied, thereby solving the problem.

The purpose behind the sale of kidneys is to gain a suitable organ for transplantation into a patient whose remaining kidney is failing. Transplant surgery is quite expensive and the follow-up treatment requires administration of extensive immunosuppressive drugs to prevent the body from rejecting the transplanted organ. The overall tab for the surgery and treatment is about $150,000. Yet this is a bargain. The kidney dialysis treatments required to sustain ailing kidney patients who await a donor kidney average $80,000 per year; an average wait of 4.5 years would involve total expenditure of $360,000 to produce only 1/3 the benefit of a transplant. (Since dialysis patients suffer pain and greatly reduced quality of life, this actually understates the advantage of the transplant.)
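The cost comparison in the paragraph above reduces to a few lines of arithmetic, using the rounded figures the op-ed cites:

```python
# Rounded cost figures from the op-ed.
transplant_total = 150_000    # surgery plus follow-up immunosuppressive therapy
dialysis_per_year = 80_000
average_wait_years = 4.5

# Total dialysis spending over the average wait for a donor kidney.
dialysis_while_waiting = dialysis_per_year * average_wait_years   # $360,000

# Saving per patient from an immediate transplant versus dialysis
# over the full wait -- before counting the quality-of-life difference.
saving = dialysis_while_waiting - transplant_total                # $210,000
```

Since the op-ed also notes that dialysis delivers a fraction of the benefit of a transplant, this dollar differential understates the true advantage.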

Becker and Elias have studied transplantation in countries that utilize an “implied consent” approach to transplantation, one which allows hospitals and doctors to take kidneys from the dead without obtaining prior consent or permission of relatives, unless the donor has explicitly opted out of donation beforehand. (American consumers would recognize this system under the name of “negative ballot.”) They find that the shortage of kidneys still persists. They estimate that a fee of about $15,000 would be sufficient to virtually eliminate the U.S. shortage. (Actually, they suggest a range of $5,000-$25,000, with $15,000 as a midpoint estimate. The authors note, for comparison, that the average fee paid to a surrogate mother is about $20,000.) This would increase the cost of kidney transplant surgery, but not nearly enough to affect the relative superiority of transplantation versus dialysis.
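Even at the top of the authors’ suggested fee range the comparison is not close, as a quick sensitivity check shows (the $150,000 transplant cost and the $80,000-per-year dialysis cost over a 4.5-year wait are the figures cited above):

```python
transplant_base_cost = 150_000
dialysis_cost_over_wait = 80_000 * 4.5    # $360,000 at the average 4.5-year wait

# Becker and Elias's suggested donor-fee range: low, midpoint, high.
for donor_fee in (5_000, 15_000, 25_000):
    total_with_fee = transplant_base_cost + donor_fee
    # Transplantation remains far cheaper at every point in the range.
    assert total_with_fee < dialysis_cost_over_wait
```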

Becker and Elias wrote their op-ed for a knowledgeable audience, so they refrained from pointing out most of the time-honored principles stressed here. Perhaps the most subtle point omitted is the knowledge-gathering function performed by free markets. Markets collate dispersed information possessed by billions of people in fragmentary form by giving everybody an incentive to contribute to the information-gathering. A purely administrative market like the organ-donor market in its present form lacks this capability. Non-profits and foundations may have noble motives, but they cannot come remotely close to equaling the information generated by a functioning free market.

That is important here because kidney transplantation is a complex process. Organ and patient must be compatible in terms of tissue and blood type. The harvesting must be timed to preserve the kidney and effect the transplant as soon as possible. This is exactly the kind of process that is done badly by administrative fiat and done well by free markets. If the Becker/Elias proposal is adopted, it is likely that the process would be improved by the use of forward markets in which willing sellers would sell their organ for future use upon their death.

Becker and Elias conclude their case hopefully. “Eventually, the advantages of allowing payment for organs would become obvious. At that point, people will wonder why it took so long to adopt such an obvious and sensible solution to the shortage of organs for transplant.”

Arguments (?) Against a Legal Market for Human Kidneys
Viewed from any rational perspective, the case for a legal market in human kidneys would seem ironclad. The arguments against a legal market – if “arguments” is the right characterization – mostly revolve around the “repugnance” of the concept.

To be sure, the idea of buying and selling body parts is unfamiliar. Perhaps the image most frequently conjured up is the “body snatcher” who supplied cadavers to medical schools in 18th century Europe by surreptitiously digging up bodies from cemeteries and selling them. Of course, this is an example of the black markets described above; it is the kind of problem that legal markets solve rather than create.

The repugnance associated with legal markets for human body parts is a holdover of the millennia-old, instinctive fear and distrust of monetary exchange. In man’s tribal past, money conveyed the ability to transact beyond the boundaries set by the tribe. Tribe members could produce and sell directly to strangers and indirectly to people whose very identities were unknown. The power of the tribe was thereby circumvented and lessened. Transactions that were innocuous within the tribe became forbidden outside it. Certain activities became tainted by association with money.

That ancient taboo still lingers in aphorisms such as “money is the root of all evil.” Donation of organs is considered altruistic while sale is demonized, despite the fact that both parties to the transaction gain in either case and monetary transaction greatly enhances the scope of the market and total benefits generated. There is no rational basis for this taboo. Mankind has gradually evolved away from it, if only because monetary exchange greatly enhances our survival chances.

A related argument maintains that commercial transactions in human organs would coarsen our culture by deadening our sensitivities. It is not clear just whose sensitivities are at risk. Presumably it is not those of the corpses from whom organs are harvested – to be sure, with prior consent or the consent of surviving relatives. After all, the major religions and prevailing cultural norms encourage us in the belief that our bodies are mere physical shells that house our inner souls. Regardless of whether the soul survives physical death, the body is now mere refuse to be disposed of respectfully. Exactly how can it be politically de rigueur to recycle newsprint but unthinkable to recycle body parts?

Even more to the point, the sensitivities of transplant recipients are clearly left out of this accounting. One wonders how the 4,500 martyrs to the cause of cultural sensitivity in 2012 felt about the cause for which they gave their lives. Dialysis patients age 45-49 can look forward to an average life span of 8 years, compared to the 23 years they could expect with a transplant. Do they regard this differential as a noble sacrifice for the aesthetic elevation of society?

Since a normal human body contains two kidneys but can function adequately with one, healthy people can and do provide a kidney to sick people who need one. Opponents of a free market in kidneys foresee that the poor will be “exploited” in this market, either because they will be inadequately compensated or because they will suffer “seller’s remorse” and later regret the sale of their kidney.

It is bizarre to suppose that ordinary people can be called heroes for making a split-second decision to give their lives to save others in a fire or traffic accident, but the same people would be suckers if they made a carefully considered decision to sell a kidney in an orderly market. The tipoff to the political opportunism behind this particular argument is that other people insist it is the transplant recipient who is being exploited by the organ seller, who will demand an exorbitant fee and thereby profit unduly. Clearly, both parties cannot be exploiting each other; at least one of these arguments must be wrong. In fact, they both are. There is no reason to expect anybody to be exploited by a legal market for human organs. It is black markets that pose a danger of exploitation, and these would be replaced by the legal market.

It is ironic that when the rich and famous – Steve Jobs and Mickey Mantle come to mind immediately – somehow succeed in jumping the queue to obtain organs, there is the usual public outcry, but the criers are obsessed with preventing the rich from being happy rather than with allowing the poor to enjoy the same benefits. This is where proponents of Obamacare should be concentrating their efforts – on providing catastrophic-care insurance to cover transplants and charitable donations for the uninsured.

Then there are the quasi-socialists who insist that nothing worth doing is worth earning a profit on. Since profit directs the flow of resources where they are most needed, no economist could make this case with a straight face. But it is guaranteed a permanent home on the political Left.

Almost 30 years ago, the author of this essay advanced the argument for a legal market in human body parts to his university economics class. Objections focused on presumed abuses; one student facetiously volunteered that “people would be selling granny.” Of course, creation of a legal market would replace the black markets where abuses by criminal suppliers were a foregone conclusion. The author pointed this out and went so far as to conjecture that black markets were even then operating. Subsequent investigations have vindicated that prediction, going so far as to report murder and abduction rings engaged in organ collection and at least one government allegedly involved in the practice.

There is no respectable intellectual argument against the legal market in human organs that Becker and Elias champion. Few if any free-market reforms can boast such an overwhelming case in their favor as this one.

Microeconomics vs. Macroeconomics
Compare the annual saving or upgrading of 80,000 lives, more or less, with the futile efforts of the Federal Reserve and the President’s Council of Economic Advisors. This comparison indexes the relative merits of microeconomics and macroeconomics, respectively.

DRI-234 for week of 11-17-13: Economists Start to See the Light – and Speak Up

An Access Advertising EconBrief:

Economists Start to See the Light – and Speak Up

In order for dreadful economic policies to be ended, two things must happen. Economists must recognize the errors – then, having seen the light, they must say so publicly. For nearly five years, various economists have complained about Federal Reserve economic policies. Unfortunately, the complaints have been restrained and carefully worded to dilute their meaning and soften their effect. This has left the general public confused about the nature and degree of disagreement within the profession. It has also failed to highlight the radicalism of the Fed’s policies.

Two recent Wall Street Journal economic op-eds have broken this pattern. They bear unmistakable marks of acuity and courage. Both pieces focus particularly on the tactic of quantitative easing, but branch out to take in broader issues in the Fed’s conduct of monetary policy.

A Monetary Insider Kneels at the Op-Ed Confessional to Beg Forgiveness

Like many a Wall Street bigwig, Andrew Huszar has led a double life as managing director at Morgan Stanley and Federal Reserve policymaker. After he served seven years at the Fed from 2001-2008, good behavior won him a parole to Morgan Stanley. But when the Great Financial Crisis hit, TARP descended upon the landscape. This brought Huszar a call to return to public service in spring, 2009 as manager of the Fed’s program of mortgage-backed securities purchases. In “Confessions of a Quantitative Easer” (The Wall Street Journal, 11/12/2013), Huszar gives us the inside story of his year of living dangerously in that position.

Despite his misgivings about what he perceived as the Fed’s increasing subservience to Wall Street, Huszar accepted the post and set about purchasing $1.25 trillion (!) of mortgage-backed securities over the next year. This was the lesser-known half of the Fed’s quantitative-easing program, the little brother of the Fed’s de facto purchases of Treasury debt. “Senior Fed officials… were publicly acknowledging [past] mistakes and several of those officials emphasized to me how committed they were to a major Wall Street revamp.” So, he “took a leap of faith.”

And just what, exactly, was he expected to have faith in? “Chairman Ben Bernanke made clear that the Fed’s central motivation was to ‘affect credit conditions for households and businesses.'” Huszar was supposed to “quarterback the largest economic stimulus in U.S. history.”

So far, Huszar’s story seems straightforward enough. For over half a century, economists have had a clear idea of what it meant to stimulate an economy via central-bank purchases of securities. That idea has been to provide banks with an increase in reserves that simultaneously increases the monetary base. Under the fractional-reserve system of banking, this increase in reserves will allow banks to increase lending, causing a pyramidal increase in deposits, money, spending, income and employment. John Maynard Keynes himself was dubious about this use of monetary policy, at least during the height of a depression, because he feared that businesses would be reluctant to borrow in the face of stagnant private demand. However, Keynes’ neo-Keynesian successors gradually came to understand that the simple Keynesian remedy of government deficit spending would not work without an accompanying increase in the money stock – hence the need for reinforcement of fiscal stimulus with monetary stimulus.
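The pyramidal expansion described here is the textbook money-multiplier story, which can be sketched numerically. The 10% reserve ratio below is a standard classroom assumption, not a claim about actual Fed requirements:

```python
def max_deposit_expansion(new_reserves, reserve_ratio=0.10):
    """Upper bound on deposit creation: new_reserves / reserve_ratio."""
    return new_reserves / reserve_ratio

def expansion_by_rounds(new_reserves, reserve_ratio=0.10, rounds=500):
    """Simulate the pyramid round by round: each bank holds the
    required fraction of a new deposit and re-lends the rest."""
    total, injection = 0.0, float(new_reserves)
    for _ in range(rounds):
        total += injection
        injection *= (1.0 - reserve_ratio)  # fraction re-lent each round
    return total

# $100 of new reserves supports up to $1,000 of deposits at a 10% ratio;
# the round-by-round process converges to the same limit.
limit = max_deposit_expansion(100)    # 1000.0
approx = expansion_by_rounds(100)     # converges to 1000.0
```

The geometric-series limit and the iterative simulation agree, which is the point: a modest reserve injection can, in principle, support a multiple of itself in new money.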

Only, doggone it, things just didn’t seem to work out that way. Sure enough, the federal government passed a massive trillion-dollar spending measure that took effect in 2009. But “it wasn’t long before my old doubts resurfaced. Despite the Fed’s rhetoric, my program wasn’t helping to make credit any more accessible for the average American. The banks were only issuing fewer and fewer loans. More insidiously, whatever credit they were issuing wasn’t getting much cheaper. QE may have been driving down the wholesale cost for banks to make loans, but Wall Street was pocketing most of the extra cash.”

Just as worrisome was the reaction to the doubts expressed by Huszar and fellow colleagues within the Fed. Instead of worrying “obsessively about the costs versus the benefits” of their actions, policymakers seemed concerned only with feedback from Wall Street and institutional investors.

When QE1 concluded in April, 2010, Huszar observed that Wall Street banks and near-banks had scored a triple play. Not only had they booked decent profits on those loans they did make, but they also collected fat brokerage fees on the Fed’s securities purchases and saw their balance sheets enhanced by the rise in mortgage-security prices. Remember – the Fed’s keenness to buy mortgage-backed securities in the first place was due primarily to the omnipresence of these securities in bank portfolios. Indeed, mortgage-backed securities served as liquid assets throughout the financial system and it was their plummeting value during the financial crisis that caused the paralyzing credit freeze. Meanwhile, “there had been only trivial relief for Main Street.”

When, a few months later, the Fed announced QE2, Huszar “realized the Fed had lost any remaining ability to think independently from Wall Street. Demoralized, I returned to the private sector.”

Three years later, this is how Huszar sizes up the QE program. “The Fed keeps buying roughly $85 billion in bonds a month, chronically delaying so much as a minor QE taper. Over five years, its purchases have come to more than $4 trillion. Amazingly, in a supposedly free-market nation, QE has become the largest financial-market intervention by any government in world history.”

“And the impact? Even by the Fed’s sunniest calculations, aggressive QE over five years has generated only a few percentage points of U.S. growth. By contrast, experts outside the Fed…suggest that the Fed may have [reaped] a total return of as little as 0.25% of GDP (i.e., a mere $40 billion bump in U.S. economic output).” In other words, “QE isn’t really working” – except for Wall Street, where 0.2% of U.S. banks control 70% of total U.S. bank assets and form “more of a cartel” than ever. By subsidizing Wall Street banks at the expense of the general welfare, QE had become “Wall Street’s new ‘too big to fail’ policy.”
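The two figures Huszar quotes are at least mutually consistent, as a quick check shows (taking the 0.25% share and the $40 billion bump at face value):

```python
# Figures quoted in Huszar's op-ed.
qe_output_bump = 40e9       # "a mere $40 billion bump in U.S. economic output"
qe_share_of_gdp = 0.0025    # "as little as 0.25% of GDP"

# The GDP level implied by the two quoted figures.
implied_gdp = qe_output_bump / qe_share_of_gdp   # about $16 trillion
```

An implied GDP of roughly $16 trillion squares with the actual size of the U.S. economy around 2013, so the quoted return really is as small as it sounds.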

The Beginning of Wisdom

Huszar’s piece gratifies on various levels. It answers one question that has bedeviled Fed-watchers: Do the Fed’s minions really believe the things the central bank says? The answer seems to be that they do – until they stop believing. And that happens eventually even to high-level field generals.

It is obvious that Huszar stopped drinking Federal Reserve Kool-Aid sometime in 2010. The Fed’s stated position is that the economy is in recovery – albeit a slow, fragile one – midwifed by previous fiscal and monetary policies and preserved by the QE series. Huszar doesn’t swallow this line, even though dissent among professional economists has been muted over the course of the Obama years.

Most importantly, Huszar’s eyes have been opened to the real source of the financial crisis and ensuing recession; namely, government itself. “Yes, those financial markets have rallied spectacularly…but for how long? Experts…are suggesting that conditions are again ‘bubble-like.'”

Having apprehended this much, why has Huszar’s mind stopped short of the full truth? Perhaps his background, lacking in formal economic training, made it harder for him to connect all the dots. His own verdict on the failings of QE should have driven him to the next stage of analysis and prompted him to ask certain key questions.

Why did banks “only issu[e] fewer and fewer loans”? After all, this is why QE stimulated Wall Street but not Main Street; monetary policy normally provides economic stimulus by inducing loans to businesses and (secondarily) consumers, but in this case those loans were conspicuous by their absence. The answer is that the Fed arranged to pay interest on the excess reserves it holds for its member banks. Instead of making risky loans, banks could make a riskless profit by holding excess reserves. This unprecedented state of affairs was deliberately stage-managed by the Fed.
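The bank’s choice can be sketched as a simple expected-return comparison. All rates and default probabilities below are illustrative assumptions, not actual Fed or market figures:

```python
# Stylized bank decision: lend at a risky rate, or park excess
# reserves at a riskless rate paid by the central bank.
def expected_loan_return(loan_rate, default_prob, loss_given_default=1.0):
    """Expected return per dollar lent, net of expected default losses."""
    return (1 - default_prob) * loan_rate - default_prob * loss_given_default

riskless_reserve_rate = 0.0025    # illustrative rate on excess reserves

safe_borrower = expected_loan_return(0.04, default_prob=0.001)   # ~3.9%
shaky_borrower = expected_loan_return(0.04, default_prob=0.05)   # negative

# The bank lends only where the expected return beats the riskless rate;
# marginal borrowers are rationed out of the credit market entirely.
lend_to_safe = safe_borrower > riskless_reserve_rate     # True
lend_to_shaky = shaky_borrower > riskless_reserve_rate   # False
```

Once a riskless return on idle reserves exists, every borrower whose risk-adjusted return falls below it loses access to credit, which is one mechanism by which stimulus can fail to reach Main Street.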

Why has the Fed been so indifferent to the net effects of its actions, instead of “worry[ing] obsessively about the costs versus the benefits”? The answer is that the Fed has been lying to the public, to Congress and conceivably even to the Obama Administration about its goals. The purpose of its actions has not been to stimulate the economy, but rather to keep it comatose (for “its” own good) while the Fed artificially resuscitates the balance sheets of banks.

Why did the Fed suddenly start buying mortgage-backed securities after “never [buying] one mortgage bond…in its almost 100-year history”? Bank portfolios (more particularly, portfolios of big banks) have been stuffed to the gills with these mortgage-backed securities, whose drastic fall in value during the financial crisis threatened the banks with insolvency. By buying mortgage-backed securities like they were going out of style, the Fed increases the demand for those securities. This drives up their price. This acts as artificial respiration to bank balance sheets, just as Andrew Huszar relates in his op-ed.

The resume of Fed Chairman Ben Bernanke is dotted with articles extolling the role played by banks as vital sources of credit to business. Presumably, this – rather than pure cronyism, as vaguely hinted by Huszar – explains Bernanke’s obsession with protecting banks. (It was Bernanke, acting with the Treasury Secretary, who persuaded Congress to pass the enormous bailout legislation in late 2008.)

Why has “the Fed’s independence [been] eroding”? There is room for doubt about Bernanke’s motivations in holding both short-term and long-term interest rates at unprecedentedly low levels. These low interest rates have enabled the Treasury to finance trillions of dollars in new debt and roll over trillions more in existing debt at low rates. At the higher interest rates that would otherwise prevail in our circumstances, the debt service would devour most of the federal budget. Thus, Bernanke is carrying water for the Treasury. Reservoirs of water.

Clearly, Huszar has left out more than he has included in his denunciation of QE. Yet he has still been savaged by the mainstream press for his presumption. This speaks volumes about the tacit gag order that has muffled criticism of the Administration’s economic policies.

It’s About Time Somebody Started Yellin’ About Yellen

Kevin Warsh was the youngest man ever to serve as a member of the Federal Reserve Board of Governors when he took office in 2006. He earned a favorable reputation in that capacity until he resigned in 2011. In “Finding Out Where Janet Yellen Stands” (The Wall Street Journal, 11/13/2013), Warsh digs deeper into the views of the new Federal Reserve Board Chairman than the questions on everybody’s lips: “When will ‘tapering’ of the QE program begin?” and “How long will the period of ultra-low interest rates last?” He sets out to “highlight – then question – some of the prevailing wisdom at the basis of current Fed policy.”

Supporters of QE have pretended that quantitative easing is “nothing but the normal conduct of monetary policy at the zero-lower-bound of interest rates.” Warsh rightly declares this to be hogwash. While central banks have traditionally lowered short-term interest rates to stimulate investment, “the purchase of long-term assets from the U.S. Treasury to achieve negative real interest rates is extraordinary, an unprecedented change in practice… The Fed is directly influencing the price of long-term Treasurys – the most important asset in the world, the predicate from which virtually all investment decisions are judged.”

Since the 1950s, modern financial theory as taught in orthodox textbooks has treated long-term U.S. government bonds as the archetypal “riskless asset.” This provides a benchmark for one end of the risk spectrum, a vital basis for comparison that is used by investment professionals and forensic economists in court testimony. Or rather, all this used to be true before Ben Bernanke unleashed ZIRP (the Zero Interest Rate Policy) on the world. Now all the finance textbooks will have to be rewritten. Expert witnesses will have to find a new benchmark around which to structure their calculations.

Worst of all, the world’s investors are denied a source of riskless fixed income. They can still purchase U.S. Treasurys, of course, but these are no longer the same asset that they knew and loved for decades. Now the risk of default must be factored in, just as it is for the bonds of a banana republic. Now the effects of inflation must be factored into the price. The effect of this transformation on the world economy is incalculably, unfavorably large.

Ben Bernanke has repeatedly maintained that the U.S. economy would benefit from a higher rate of inflation. Or, as Warsh puts it, that “the absence of higher inflation is sufficient license” for the QE program. Once again, Warsh begs to differ. Here, he takes issue with Bernanke’s critics as much as with Bernanke himself. “The most pronounced risk of QE is not an outbreak of hyperinflation,” Warsh contends. “Rather, long periods of free money and subsidized credit are associated with significant capital misallocation and malinvestment – which do not augur well for long-term growth or financial stability.”

Déjà Va-Va-Vuum

Of all the hopeful signs to recently emerge, this is the most startling and portentous. For centuries – at least two centuries before John Maynard Keynes wrote his General Theory and in the years since – the most important effect of money on economic activity was thought to be on the general level of prices; i.e., on inflation. Now Warsh is breaking with this time-honored tradition. In so doing, he is paying long-overdue homage to the only coherent business-cycle theory developed by economists.

In the early 1930s, F.A. Hayek formulated a business-cycle theory that temporarily vied with the monetary theory of John Maynard Keynes for supremacy among the world’s economists. Hayek’s theory was built around the elements stressed by Warsh – capital misallocation and malinvestment caused by central-bank manipulation of the money supply and interest rates. In spite of Hayek’s prediction of the Great Depression in 1929 and of the failure of the Soviet economy in the 1930s, Hayek’s business-cycle theory was ridiculed by Keynes and his acolytes. The publication of Keynes’ General Theory in 1936 relegated Hayek to obscurity in his chosen profession. Hayek subsequently regained worldwide fame with his book The Road to Serfdom in 1944 and even won the Nobel Prize in economics in 1974. Yet his business-cycle theory has survived only among the cult of Austrian-school economists who stubbornly refused to die out even as Keynesian economics took over the profession.

When Keynesian theory was repudiated by the profession in the late 1970s and 80s, the Austrian school remained underground. The study of capital theory and the concept of capital misallocation had gone out of favor in the 1930s and were ignored by the economics profession in favor of the less-complex modern Quantity Theory developed by Milton Friedman and his followers. Alas, monetarism went into eclipse in the 80s and 90s and macroeconomists drifted back towards a newer, vaguer version of Keynesianism.

The Great Financial Crisis of 2008, the subsequent Great Recession and our current Great Stagnation have made it clear that economists are clueless. In effect, there is no true Macroeconomic theory. Warsh’s use of the terms “capital misallocation” and “malinvestment” may be the first time since the 1930s that these Hayekian terms have received favorable mention from a prominent figure in the economic Establishment. (In addition to his past service as a Fed Governor, Warsh also served on the National Economic Council during the Bush Administration.)

For decades, graduate students in Macroeconomics have been taught that the only purpose to stimulative economic policies by government was to speed up the return to full employment when recession strikes. The old Keynesian claims that capitalist economies could not achieve full employment without government deficit spending or money printing were discredited long ago. But this argument in favor of artificial stimulus has itself now been discredited by events, not only in the U.S. and Europe but also in Japan. Not only that, the crisis and recession proceeded along lines closely following those predicted by Hayek – lavish credit creation fueled by artificially low interest rates long maintained by government central banks, coupled with international transmission of capital misallocation by flexible exchange rates. It is long past time for the economics profession to wrench its gaze away from the failed nostrums of Keynes and redirect its attention to an actual theory of business cycles with a demonstrated history of success. Warsh has taken the key first step in that direction.

The Rest of the Story

When a central bank deliberately sets out to debase a national currency, the shock waves from its actions reverberate throughout the national economy. When the economy is the world’s productive engine, those waves resound around the globe. Warsh patiently dissects current Fed policy piece by piece.

To the oft-repeated defense that the Fed is merely in charge of monetary policy, Warsh correctly terms the central bank the “default provider of aggregate demand.” In effect, the Fed has used its statutory mandate to promote high levels of employment as justification for assuming the entire burden of economic policy. This flies in the face of even orthodox, mainstream Keynesian economics, which sees fiscal and monetary policies acting in concert.

The United States is “the linchpin in the international global economy.” When the Fed adopts extremely loose monetary policy, this places foreign governments in the untenable position of having either to emulate our monetary ease or to watch their firms lose market share and employment to U.S. firms. Not surprisingly, politics pulls them in the former direction and this tends to stoke global inflationary pressures. If the U.S. dollar should depreciate greatly, its status as the world’s vehicle currency for international trade would be threatened. Not only would worldwide inflation imperil the solidity of world trade, but the U.S. would lose the privilege of seigniorage, the ability to run continual trade deficits owing to the world’s willingness to hold American dollars in lieu of using them to purchase goods and services.

The Fed has made much of its supposed fidelity to “forward guidance” and “transparency,” principles intended to allow the public to anticipate its future actions. Warsh observes that its actions have been anything but transparent and its policy hints anything but accurate. Instead of giving lip service to these cosmetic concepts, Warsh advises, the Fed should simply devote its energies to following correct policies. Then the need for advance warning would not be so urgent.

Under these circumstances, it is hardly surprising that we have so little confidence in the Fed’s ability to “drain excess liquidity” from the markets. We are not likely to give way in awed admiration of the Fed’s virtuosity in monetary engineering when its pronouncements over the past five years have varied from cryptic to highly unsound and its predictions have all gone wrong.

Is the Tide Turning?

To a drowning man, any sign that the waters are receding seems like a godsend. These articles appear promising not only because they openly criticize the failed economic policies of the Fed and (by extension) the Obama Administration, but because they dare to suggest that the Fed’s attempt to portray its actions as merely conventional wisdom is utterly bogus. Moreover, they imply or (in Kevin Warsh’s case) very nearly state that it is time to reevaluate the foundations of Macroeconomics itself.

Is the tide turning? Maybe or maybe not, but at last we can poke our heads above water for a lungful of oxygen. And the fresh air is intoxicating.

DRI-250 for week of 1-27-13: What Are the Lessons of Econometrics?

An Access Advertising EconBrief:

What Are the Lessons of Econometrics?

Recently, Federal Reserve official Janet Yellen earned attention with a speech in which she justified monetary easing by citing the Fed’s use of a new “macroeconometric model” of the economy. The weight of the term seemed to give it rhetorical heft, as if the combination of macroeconomics and econometrics produced a synergy that each one lacked individually. Does econometrics hold the key to the mysteries of optimal macroeconomic policy? If so, why are we only now finding that out? And, more broadly, is economics really the quantitative science it often pretends to be?


As practiced for roughly eight decades, econometrics combines the knowledge of three fields – economics, mathematics and statistics. Economics develops the pure logic of human choice that structures our quantitative investigations into human behavior. Mathematics determines the form in which economic principles are expressed for purposes of statistical analysis. Statistics allows for the systematic processing and analysis of sample data organized into meaningful form using the principles of economics and mathematics.

Suppose we decide to study the market for production and consumption of corn in the U.S. Economics tells us that the principles of supply and demand govern production and consumption. It further tells us that the price of corn will gravitate toward the point at which the aggregate amount of corn that U.S. farmers wish to produce will equal the aggregate amount that U.S. consumers wish to consume and store for future use.

Mathematics advises us to portray this relationship between supply and demand by expressing both as mathematical equations. That is, both supply and demand will be expressed as mathematical functions of relevant variables. The orthodox formulation treats the price of corn as the independent variable and the quantity of corn supplied and demanded, respectively, as the dependent variable of each equation. Other variables – such as income on the demand side or weather on the supply side – enter the equations as well, each with its own parameter, or coefficient, so that its effect on quantity is isolated from the effect of price. Finally, our model of the corn market will stipulate that the two equations will produce an equal quantity demanded and supplied of corn.
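The two-equation setup just described can be made concrete with a minimal numerical sketch. The linear functional forms, the coefficients, and the shift variables (income, rainfall) below are purely illustrative assumptions, not estimates from any real corn data:

```python
# Illustrative two-equation market model: demand, supply, and the
# equilibrium condition that quantity demanded equals quantity supplied.
# All coefficients are invented for the sake of the example.

def quantity_demanded(price, income):
    # Demand falls with price; income is a shift variable whose
    # coefficient (0.5) isolates its effect from that of price.
    return 100.0 - 8.0 * price + 0.5 * income

def quantity_supplied(price, rainfall):
    # Supply rises with price; rainfall shifts the supply curve.
    return 20.0 + 12.0 * price + 0.3 * rainfall

def equilibrium_price(income, rainfall):
    # Setting the two linear equations equal and solving for price:
    # 100 - 8P + 0.5*income = 20 + 12P + 0.3*rainfall
    return (80.0 + 0.5 * income - 0.3 * rainfall) / 20.0

income, rainfall = 40.0, 30.0
p = equilibrium_price(income, rainfall)
print(round(p, 2))  # the market-clearing price implied by the model
# At that price the two quantities coincide, as the model stipulates:
print(round(quantity_demanded(p, income), 2),
      round(quantity_supplied(p, rainfall), 2))
```

The econometrician's task, in this framing, is to replace the invented coefficients with values inferred from sample data.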

Statistics allows us to gather data on corn without having to compile every single scrap of information on every ear of corn produced during a particular year. Instead, sample data (probably provided by government bureaus) can be consulted and carefully processed using the principles of statistical inference.

In principle, this technique can derive equations for both the supply of corn and its demand. These equations can be used either to predict future corn harvests or to explain the behavior of corn markets in the past. For over a half-century, training in econometrics has been a mandatory part of postgraduate education in economics at nearly all American universities.

Does this procedure leave you scratching your head? In particular, are you moved to wonder why mathematics and simultaneous equations should intrude into the study of economics? Or have we outlined a beautiful case of interdisciplinary cooperation in science?

Historical Evolution

As it happens, the development of econometrics was partly owing to the collision of scientific research programs that evolved concurrently in similar directions. Economics has interacted with data virtually since its inception. In the 1600s, Sir William Petty utilized highly primitive forms of quantitative analysis in England to analyze subjects like taxation and trade. Adam Smith populated The Wealth of Nations with various homely numerical examples. In the early 19th century, a French economist named Cournot used mathematics to develop pathbreaking models of monopoly and oligopoly, which anticipated more famous work done many decades later.

A French economist working in Lausanne, Leon Walras, and an Italian, Enrico Barone, applied algebraic mathematics to economics by expressing economic relationships in the form of systems of simultaneous equations. They did not attempt to fill in the parametric coefficients of their economic variables with real numbers – in fact, they explicitly denied the possibility of doing so. Their intent was purely symbolic. In effect, they were saying: “Isn’t it remarkable how the relationships in an economic system resemble those in a mathematical system of simultaneous equations? Let’s pretend that an economy of people could be described and analyzed using algebraic mathematics as a tool – and then see what happens.”

At almost the same time (the early 1870s), the British economist William Stanley Jevons developed the principles of marginalism, which have been the cornerstone of economic logic ever since. Economic value is determined at the margin – which means that both producers and consumers gauge the effects of incremental changes in action. If the benefits of the action exceed the costs, they approve the action and take it. If the reverse holds, they spurn it. Their actions produce tendencies toward marginal equality of benefits and costs, similar in principle to the quantity supplied/quantity demanded equality cited above. Jevons thought it amazing that this incremental logic seemed to correspond so closely to the logic inherent in the differential calculus. So he developed his theory of consumer demand in mathematical terms, using calculus. (It is also fascinating that the Austrian simultaneous co-discoverer of marginalism, Carl Menger, refused to employ calculus in his formulations.)

By the early 1900s, mathematics had taken root in economics. Soon a British statistician, Ronald Fisher, would modernize the science of statistics. It was only a matter of time until mathematical economists began using statistics to put numbers into the coefficient slots in their equations, which were previously occupied with algebraic letters serving as symbolic place-holders.

In 1932, economist and businessman Alfred Cowles endowed the Cowles Commission, which later made its home at the University of Chicago. The purpose of the Commission was to do economic research, but the research was targeted toward mathematics and economics. The original motto of the Commission was the same as that of the Econometric Society. It was taken from the words of the great physicist Lord Kelvin: “Science is measurement.”

Seldom have three words conveyed so much meaning. The implication was that economics was, or should strive to be, a “science” in exactly the same sense as physics, biology, chemistry and the rest of the hard physical sciences. The physical sciences did science by observing empirical regularities and expressing them mathematically. They tested their theories using controlled laboratory experiments. They were brilliantly successful. The progress of mankind can be traced by following the progression of their work.

In retrospect, it was probably inevitable that social sciences like economics should take this turn – that they should come to define their success, their very meaning, by the extent and degree of their emulation of the natural sciences. The Cowles Commission was the cutting edge of econometrics for the next 20 years, after which time its major focus shifted from empirical to theoretical economics – back to mathematical models of the economy using simultaneous equations. But by that time, econometrics had gained an impregnable beachhead in economics.

The Role of Econometrics

Great hopes were held out for econometrics. Of course, it was young as sciences go, but by careful study and endless trial and error, we would gradually get better at building economic models, choosing just the right mathematical forms and using exactly the right statistical techniques. Our forecasts would slowly, but surely, improve.

After all, we had a country full of universities whose economists had nothing better to do than monkey around with econometrics. They would submit their findings for review by their peers. The review would lead to revisions. The best studies would be published in the leading economics journals. At last, at long last, we would discover the empirical regularities of economics, the rules and truths that had remained hidden from us for centuries. The entire system of university tenure and promotion would be based on this process, leading to the notorious maxim “publish or perish.” Success would be tied to the value of government research grants acquired to do this research. The brightest young minds would succeed and climb the ladder of university success. They would teach in graduate school. A virtuous cycle of success would produce more learning, better economics, better econometrics, better models, better predictions, more economic prosperity in society, better education for undergraduates and graduate students alike and a better life for all.

As it turned out, none of these hopes have been fulfilled.

Well, that’s not entirely accurate. A system was created that has ruled academic life for decades and, incredibly, shows no sign of slowing down. Young economists are taught econometrics, after a fashion. They dutifully graduate and scurry to a university where they begin the race for tenure. Like workers in a sausage factory, they churn out empirical output that is read by nobody excepting a few of their colleagues. The output then dies an unlamented death in the graveyard of academic journals. The academic system has benefitted from econometrics and continues to do so. It is difficult to imagine this system flourishing in its absence.

Meanwhile, back at the ranch of reality, the usefulness of econometrics to the rest of the world asymptotically approaches zero. Periodically, well-known economists like Edmond Malinvaud and Carl Christ review the history of econometrics and the Cowles Commission. They are laudatory. They praise the Commission’s work and the output of econometricians. But they say nothing about empirical regularities uncovered or benefits to society at large. Instead, they gauge the benefits of econometrics entirely from the volume of studies done and published in professional journals and the effort expended by generations of economists. In so doing, they violate the very standards of their profession, which dictate that the value of output is judged by its consumers, not by its producers, and that value is determined by price in a marketplace rather than by weight on a figurative scale.

It is considered a truism within the economics profession that no theoretical dispute was ever settled by econometrics – that is a reflection of how little trust economists place in it behind closed doors. In practice, economists put their trust in theory and choose their theories on the basis of their political leanings and emotional predilections.

We now know, as surely as we can know anything in life, that we cannot predict the future using econometrics. As Donald (now Deirdre) McCloskey once put it, you can figure this out yourself without even going to graduate school. All you have to do is figuratively ask an econometrician the “American question”: “If you’re so smart, why ain’t you rich?” Accurate predictions would yield untold riches to the predictors, so the absence of great wealth is the surest index of the poverty of econometrics.

Decades of econometric research have yielded no empirical regularities in economics. Not one. No equivalent to Einstein’s equation for energy or the Law of Falling Bodies.

It is true that economists working for private business sometimes generate predictions about individual markets using what appears to be econometrics. But this is deceptive. The best predictions are usually obtained by techniques called “data mining” that violate the basic precepts of econometrics. The economists are not interested in doing good econometrics or statistics – just in getting a prediction with some semblance of accuracy. Throwing every scrap of data they can get their hands on into the statistical pot and cooking up a predictive result doesn’t tell you much about which variables are the most important or the degree of independent influence each has on the outcome. But the only hope for predictive success may be in assuming that the future is an approximation of the past, in which case the stew pot may cook up a palatable result.

The Great “Statistical Significance” Scandal

In the science of medicine, doctors are sworn to obey the dictum of Hippocrates: “First, do no harm.” For over twenty years, economists Deirdre McCloskey and Stephen Ziliak have preached this lesson to their colleagues in the social sciences. The use of tests of “statistical significance” as a criterion of value was rampant by the 1980s, when the two began their crusade against its misuse. For, as they pointed out, the term is misunderstood not only by the general public but even by the professionals who employ it.

When a variable is found statistically significant, this does not constitute an endorsement of its quantitative importance. It merely indicates that the estimated effect would be unlikely to arise from sampling variation alone if the true effect were zero. That information is certainly useful. But it is not the summum bonum of econometrics. What we usually want to know is what McCloskey and Ziliak refer to as the “oomph” of a variable (or a model in its totality) – how much quantitative effect it has on the thing it affects.
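The divorce between significance and “oomph” is easy to demonstrate with simulated data. The effect size and sample size below are assumptions chosen purely to make the point: with enough observations, an economically trivial effect sails past the conventional significance hurdle.

```python
# A variable can be statistically significant yet quantitatively trivial.
# Simulated data; the "true effect" of 0.02 is deliberately negligible.
import math
import random

random.seed(0)
n = 100_000
true_effect = 0.02  # tiny by any economic standard
sample = [true_effect + random.gauss(0.0, 1.0) for _ in range(n)]

mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / (n - 1)
t_stat = mean / math.sqrt(var / n)

# With 100,000 observations, even this whisper of an effect clears the
# usual |t| > 1.96 threshold. "Significant" means detectable, not important.
print(round(mean, 4), round(t_stat, 2))
```

The t-statistic here comfortably exceeds 1.96 even though the estimated effect is a rounding error in economic terms – precisely the confusion McCloskey and Ziliak documented.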

These two modern-day Diogenes conducted two studies of the econometric articles published in the American Economic Review, the leading professional journal. In the 1980s, most of the authors erred in their use and interpretation of the concept of statistical significance. In the 1990s, after McCloskey and Ziliak began writing and speaking out on the problem, the proportion of errors actually increased. Among the culprits were some of the profession’s most distinguished names, including several Nobel Prize winners. When it comes to statistics and econometrics, it seems, economists literally do not know what they are doing.

According to McCloskey – who is herself a practitioner and believer in econometrics – virtually all the empirical work done in econometrics to date will have to be redone. Most of the vast storehouse of econometric work done since the 1930s is worthless.

The Difference Between the Social Sciences and the Natural Sciences

Statistics has been proven to work well in certain contexts. The classical theory of relative-frequency probability is clearly valid, for example; if it weren’t, Las Vegas would have gone out of business long ago. Those who apply statistics properly, like W. Edwards Deming, have used it with tremendous success in practical applications. Deming’s legendary methods of quality control involving sampling and testing have been validated time and again across time and cultures.

When econometrics was born, a small band of critics protested its use on the grounds that the phenomena being studied in the social sciences were not amenable to statistical inference. They do not involve replicative, repetitive events that resemble coin flips or dice throws. Instead, they are unique events involving different elements whose structures differ in innumerable ways. The number of variables involved usually differs between the physical and social sciences, being vastly larger when human beings are the phenomena under study. Moreover, the free will exerted by humans is different from unmotivated, instinctive, chemically or environmentally induced behavior found in nature. Free will can defy quantitative expression, whereas instinctive behavior may be much more tractable.

In retrospect, it now seems certain that those critics were right. Whatever the explanation, the social sciences in general and economics in particular resist the quantitative measurement techniques that took natural sciences to such heights.

The Nature of Valid Economic Prediction

We can draw certain quantitative conclusions on the basis of economic theory. The Law of Demand says that when the price of something rises, desired purchases of that thing will fall – other things equal. But it doesn’t say how much they’ll fall. And we know intuitively that, in real life, other things are never unchanged. Yet despite this severely limited quantitative content, there is no proposition in economic theory that has demonstrated more practical value.

Economists have long known that agriculture is destined to claim a smaller and smaller share of total national income as a nation gets wealthier. There is no way to predict the precise pattern of decrease, but we know that it will happen. Why? Agricultural goods are mostly either food or fiber. We realize instinctively that when our real incomes increase, we will purchase more food and more clothing – but not in proportion to the increase in income. That is, a 20% increase in real income will not motivate us to eat 20% more food – not even Diamond Jim Brady was that gluttonous. Similarly, increases in agricultural productivity will increase output and lower price over time. But a 20% decline in food prices will not call forth 20% more desired food purchases. Economists say that the demand for agricultural goods is price- and income-inelastic.
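The inelasticity argument can be sketched in a few lines of arithmetic. The elasticity value of -0.3 below is an illustrative assumption in line with textbook figures for staple foods, not a measured estimate:

```python
# Back-of-the-envelope sketch of price-inelastic demand for food.
# Elasticity of -0.3 is an assumed, illustrative value.

def quantity_response(pct_price_change, elasticity):
    # Percentage change in quantity demanded is approximately
    # elasticity times the percentage change in price.
    return elasticity * pct_price_change

elasticity = -0.3
price_change = -20.0                      # a 20% fall in food prices
q_change = quantity_response(price_change, elasticity)
print(q_change)                           # only 6% more food bought, not 20%

# Revenue = price x quantity, so farm revenue changes by roughly
# (1 + dP)(1 + dQ) - 1, with the changes expressed as fractions:
revenue_change = (1 + price_change / 100) * (1 + q_change / 100) - 1
print(round(revenue_change * 100, 1))     # revenue falls about 15%
```

This is why cheaper food, by itself, shrinks agriculture's share of national income: quantity purchased responds far less than proportionately to the price decline.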

These are the types of quantitative predictions economists can make with a clear conscience. They are couched in terms of “more” or “less,” not in terms of precise numerical predictions. They are what Nobel laureate F. A. Hayek called “pattern predictions.”

It is one of history’s great ironies that Hayek, an unrelenting critic of macroeconomics and foe of statistics and econometrics, nevertheless made some of the most prescient economic predictions of the 20th century. In 1929, Hayek predicted that the economic boom of the 1920s would soon end in economic contraction – which it did, with a vengeance. (Hayek’s mentor, Ludwig von Mises, went even further by refusing a prestigious appointment because he anticipated that “a great crash” was imminent.) In the 1930s, both Hayek and von Mises predicted the failure of the Soviet economy due to its lack of a functioning price system, particularly the absence of meaningful interest rates. That prediction, too, eventually bore fruit. In the 1950s, Hayek declared that Keynesian economic policies would produce accelerating inflation. Western industrial nations endured withering bouts of inflation beginning in the late 1960s and lasting for over a decade. Then Hayek broke with his fellow economists by insisting that this inflationary cycle could be broken, but only by drastically slowing the rate of monetary growth and enduring the resulting recession for as long as it lasted. Right again – and the recession was followed by two decades of prosperity that came to be known as the Great Moderation.

Ask the Fed

One of the tipoffs to the complicity of the mainstream press in the Obama administration’s policies is the fact that nobody has thought to ask Janet Yellen questions like this: “If your macroeconometric model is good enough for you to rely on it as a basis for a highly unconventional set of policies, why did it not predict the decline in Gross Domestic Product in fourth quarter 2012? Or if it did, why did the Fed keep that news a secret from the public?”

The press doesn’t ask those questions. Perhaps they are cowed by the subject of “macroeconometrics.” In fact, macroeconomics and econometrics are the two biggest failures of contemporary economics. And there are those who would substitute the word “frauds” for “failures.” Unless you take the position that combining two failures produces a success, there is no reason to expect anything valuable from macroeconometrics.