DRI-180 for week of 2-22-15: Will Macroeconomics Survive the Aftershocks of the Great Recession?

An Access Advertising EconBrief:

 Will Macroeconomics Survive the Aftershocks of the Great Recession?

Today there are courses on Macroeconomics in the Economics departments of every American university. It was not ever thus. Macroeconomics was born in the agony of the Great Depression. Before that, economists worked with aggregative concepts like the Quantity Theory of Money, but there was no holistic study or theory of economic aggregates. It was not clear why there should be, since all economic action originated in the minds of individual human beings and statistical data have no life of their own apart from the people embodied within them.

The Depression focused public attention on economics and economists, who were previously obscure. People wanted to know what went wrong and how to recover from it. When reigning economic theory proved unavailing and the Depression resisted the frantic resuscitative efforts of government, the economics profession threw professional decorum to the winds and started chasing after any explanation that seemed either plausible or palatable. The winner in this guess-the-theory sweepstakes was John Maynard Keynes, whose General Theory of Employment, Interest and Money offered an apparent answer to the more important of the two big questions; namely, how do we get out of this fix?

Keynes’ prescription was deficit spending by government – the more, the better until the cloud of Depression lifted. Keynes won the professional competition, but World War II made his victory anticlimactic; when the smoke cleared after the war and normal life resumed, the Depression was over. But the economics profession had taken the bit between its teeth. It had organized a new system of national income accounts around the aggregative theory of income and employment advanced by Keynes and his burgeoning school of disciples. Professional journals bulged with articles on Keynesian economics, triggering a forty-year odyssey of research.

Economics split in two. Formerly, economics studied individual economic entities like the consumer, the producer and the worker. Now the theory of consumer demand, the theory of the firm and the theory of input supply and demand were pigeonholed under the study of Microeconomics. Monetary theory, which had formerly sought to express the barter theory of pure exchange in the language of indirect monetary exchange, was now converted into Macroeconomics – the study of national economic aggregates using the language of Keynesian economics.

Keynesian economics fell into disrepute in the early 1980s and textbooks were revised accordingly. But its skeletal structure and aggregative logic still survive, as does the Micro/Macro split.

The Great Recession and its stubbornly lingering aftermath have midwifed a lengthening string of books and articles purporting to explain what went wrong and how to prevent it from happening again. In that respect, we are witnessing a replay – or perhaps “remake” might be more accurate – of the founding story of Macroeconomics. This time, though, we are heading for an entirely different ending. Whereas the Great Depression fused the study of Macroeconomics around the core of Keynesian theory, the Great Recession has fragmented the subject almost to the limits of recognition.

The Fragmentation of Macroeconomics

Of course, the fragmentation process began even earlier, with the organized opposition to Keynesian theory. That began in the 1950s with the rise to prominence of Milton Friedman. Friedman’s revival of the Quantity Theory of Money as a Monetarist theory that competed with Keynesian economics made him famous. He vied with John Kenneth Galbraith in public recognition and popularity and nearly single-handedly restored free-market economics to respectability in America. His promotion of floating exchange rates for international currencies and the permanent-income theory of consumption established an academic reputation that eventually earned him the Nobel Prize.

In the 1970s, Friedman’s Monetarism was joined by the Rational Expectations theory of Robert Lucas and Thomas Sargent, each of whom later earned the Nobel Prize. In a sense, Rational Expectations competed with both Keynesian economics and Monetarism, since both previous theories shared a common analytical framework. Rational Expectations theory denied that policymakers could systematically trick the public by printing money and inflating the currency as Keynes had advocated in a famous passage of the General Theory.

Keynesian economics fell, but rose again in the form of the New Keynesian Economics. This version was created to purge the failings of its ancestor, so it has more in common with Monetarism and Rational Expectations than it does with its namesake. But its most striking feature is its ecumenism. Perusing a list of economists who style themselves New Keynesians is an exercise in cognitive dissonance. The members have as little in common as did the members of opposing schools of thought in the 1970s.

Paul Krugman is an unreconstructed, big-spending Keynesian and defender of big government. N. Gregory Mankiw could just as easily be called a “New Monetarist.” John Taylor is the inventor of the “Taylor Rule,” the successor to Milton Friedman’s “monetary rule” that tied the hands of Federal Reserve policymakers by prescribing fixed annual increases in the quantity of money. Stanley Fischer is a famous central banker and textbook author who combined Rational Expectations theory with New Keynesian economics. David and Christina Romer are a husband and wife who frequently form a research team. As individuals, they have sometimes expressed skepticism about the effects of activist Keynesian policy measures that other New Keynesians like Krugman approve, such as tax increases. Indeed, Christina once authored a well-known paper doubting the efficacy of post-World War II federal-government macroeconomic policy intervention. (This viewpoint was nowhere to be heard, though, when she became head of President Obama’s Council of Economic Advisers.) About the only thing uniting these economists is a belief that government should take some active measures on a regular basis to improve economic outcomes. But they are dramatically, not to say violently, at odds over exactly what those measures should be.

Free-Market Economists and Macroeconomics

Historically, free-market economists have always been outliers. Unlike everybody else, they never accepted Keynes, nor did they accept his aggregative methods. Indeed, the great free-market economist F.A. Hayek was the principal rival of Keynes during the 1930s. Hayek was the only economist to offer a coherent explanatory theory of recessions and depressions. While Keynes offered an active measure to cure the Great Depression without pretending to explain why it had occurred, Hayek did just the opposite. He provided a logical explanation for the onset of the Depression, but maintained that – like the common cold – the Depression could not be cured, only endured. More specifically, Hayek insisted that active fiscal and monetary measures would merely make things worse.

This kind of stubborn independence persisted throughout succeeding decades. At least superficially, it remains intact to this day. While unreconstructed Keynesians like Joseph Stiglitz ascribe the Great Recession to “deregulation” that allowed the commercial and shadow-banking sectors to run amok, free-market economists demur by noting the nearly complete absence of any deregulatory initiatives other than the comparatively trivial Gramm-Leach-Bliley Act of 1999. But upon closer inspection of the numerous free-market exegeses of the financial crisis and Great Recession, it emerges that free-market theorists have broken ranks. They have become individualists analytically as well as temperamentally. They now appear every bit as fragmented as the rest of the economics profession.

A recent book (Boom and Bust Banking: The Causes and Cures of the Great Recession, edited and introduced by David Beckworth; Independent Institute, 2012) collected the views of prominent free-market economists on the Great Recession and financial crisis. The Introduction, by one of the leading exponents of the school, conveys the impression that all of the contributors are on the same page. This is true in only one respect: they all disapprove of actions taken by the Federal Reserve prior to and during the crisis. Beyond that, however, they are almost as diverse in their views as are the New Keynesians. This movement toward ideological and analytical atomism is unprecedented in modern economics.

Every Man Is an Island

Lawrence White is a longtime Austrian economist whose specialty is money and banking. Along with colleague George Selgin, he is a leading proponent of free banking, the advocacy of free competitive banking over the heavily regulated and protected banking that exists now.

White attributes the Great Recession and financial crisis not to “laissez faire or deregulation” but rather to “the interaction of an unanchored government fiat monetary system with a perversely regulated financial system.” The Federal Reserve’s cheap-credit policy “…kept interest rates too low for too long… in 2001-2006,” creating the “…housing boom and bust cycle of 2001-2007.” Real interest rates (that is, nominal rates minus the rate of inflation) were negative from 2002-2005. Nominal spending was artificially high in 1998-2000, leading to a boom that went bust in 2001. This pattern was repeated in 2002-2004, leading eventually to recession in 2007-2009. The second time around, the bubble created by artificially high demand was concentrated disproportionately in the housing sector. Housing prices shot up during 2001-2006, leveled off, then crashed. The resulting problems were amplified by political and regulatory mistakes that produced bailouts for financial firms and dilution of credit standards for house buyers. Overall, though, the Fed’s actions were the proximate cause of disaster. Thus, the Fed has worsened the chronic currency problems it was created to cure.
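The real-rate arithmetic White relies on is simple enough to verify directly. A minimal sketch, using illustrative numbers rather than White's actual data:

```python
# Real interest rate = nominal rate minus inflation rate (a common approximation).
# The figures below are illustrative only, not White's data.
nominal_fed_funds = 1.25   # percent: a low post-2001 policy rate
inflation = 2.5            # percent: annual inflation rate

real_rate = nominal_fed_funds - inflation
print(real_rate)           # negative: borrowers effectively gain by borrowing
```

Whenever inflation exceeds the nominal policy rate, as in this sketch, the real rate is negative – the 2002-2005 condition White describes.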

White believes that we need an alternative set of monetary institutions, with free banking leading the list.

David Beckworth is a New Monetarist. In one of his two contributions, he wonders why Fed policy was too loose for too long. He finds the answer in the Fed’s mishandling of the U.S. “productivity boom” of 2002-2004. During those years, U.S. total factor productivity rose at an average annual rate of about 2.5%, compared with an average of 0.9% over the previous 30 years. This sounds like good economic news if anything ever did. Yet, incredible as it seems, Federal Reserve policy turned it into bad news.

The Fed was – and still is – dead set on avoiding “deflation” at all costs. The quotation marks reflect uncertainty about just what constitutes the sort of falling overall price level we are, or should be, trying to avoid. In practice, the Fed and other central banks treat the prospect of even a tiny overall fall in prices as a catastrophe of the first order. Supposedly, deflation fatally strains borrowers, who are thereby forced to repay debt with dollars of successively greater value. In any case, a general increase in productivity causes costs to fall and, all else equal, will tend to cause prices to fall, too. Rather than allow this to happen, the Fed increases the money supply to lower interest rates, increase inflation and raise the general level of prices. And that is what happened in 2002-2004 to offset the productivity boom. This tended to create an artificial increase in the general level of demand. The Fed was assuming that the falling price level was the precursor of a decrease in aggregate demand, lower investment, lower real income, less employment and more unemployment. It believed that its actions were needed to prevent a recession. Wrong; the Fed generated an artificial increase in aggregate demand and an increase in inflation instead. Then, later on, the boom turned into a bust.

Beckworth believes that the Fed makes so many mistakes because it lacks a proper source of feedback from markets. He advocates targeting of Nominal Gross Domestic Product (NGDP) by the Fed. NGDP is simply the nominal level of spending in the economy, which Beckworth believes should be held to as constant a level as possible for optimal results.
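Beckworth's proposal amounts to a level rule for nominal spending. A hedged sketch of the arithmetic – the target growth path and dollar figures below are hypothetical illustrations, not Beckworth's estimates:

```python
# NGDP targeting in miniature: hold nominal spending (price level x real output)
# to a fixed growth path; the policy signal is the gap between actual and
# target NGDP. All numbers are hypothetical.
target_growth = 0.05          # 5% annual NGDP growth path (assumed)
base_ngdp = 15_000.0          # billions of dollars in year 0 (assumed)

def target_path(year):
    """Target NGDP level after `year` years on the fixed growth path."""
    return base_ngdp * (1 + target_growth) ** year

actual_ngdp_year2 = 16_000.0  # hypothetical observed NGDP in year 2
gap = actual_ngdp_year2 - target_path(2)
print(round(gap, 1))          # negative gap: spending below target, so ease
```

A negative gap signals the Fed to ease and a positive gap to tighten – the market-feedback discipline Beckworth believes the Fed currently lacks.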

Beckworth’s ideas are elaborated further by Scott Sumner, the leading New Monetarist who writes one of the most widely followed economics blogs in the world. Sumner proposes the creation of a NGDP futures market to give the Fed market feedback on the level of NGDP. Sumner finds the cause of the Great Recession and financial crisis in “misdiagnosis by macroeconomists,” not mistakes made by bankers and regulators. Indeed, he exhibits touching faith in the willingness of central bankers to strive for favorable economic outcomes – if only they would see the light! Alas, macroeconomics is ruled by “superstitions, including the view that good economists are those that can predict the business cycle, or asset-market crashes.” Stop relying on policymakers being smarter than markets, Sumner pleads; instead, “restructure macro around market expectations.”

Hedge-fund manager and finance theorist Diego Espinosa believes that the Fed created the housing bubble and ensuing financial crisis by creating an environment that not only allowed, but actively encouraged, traders to pursue a “carry trade” in mortgage securities. Carry trading means borrowing short-term and lending long-term. Leverage is used to juice up the return, since the spread earned by the trader is usually small. The mortgage securities used were provided by investment-banking houses rather than commercial banks because investment bankers had the necessary experience and expertise to “securitize” mortgages in packaged form, supervise their rating and distribute them widely. Distribution in tranched form, from highest-rated to lowest, allowed the widest possible distribution among all risk classes of buyers. After all, with small spreads, there were only two ways to increase returns – ever-greater leverage and ever-greater volume. Volume was further enhanced by diminution of credit standards in every way: lower down payments, higher loan-to-value ratios, lower income requirements, lower consumer-credit-rating standards, low or no verification of consumer application statements.
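The leverage arithmetic behind Espinosa's point is worth making explicit. A stylized sketch with invented numbers (no claim that these match actual trading-desk figures):

```python
# A stylized carry trade: borrow short, lend long, lever the small spread.
# All figures are invented for illustration.
long_yield = 0.055      # yield on the long-term mortgage security
short_rate = 0.045      # cost of short-term funding
leverage = 20           # dollars of assets held per dollar of the trader's equity

spread = long_yield - short_rate          # thin: 1 percentage point per dollar of assets
return_on_equity = spread * leverage      # fat: 20% on the trader's own capital
print(return_on_equity)
```

The same multiplier works in reverse: a one-point rise in funding costs erases the spread entirely, which is why the Fed's promise of slow, telegraphed rate increases made the trade look riskless.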

This recipe – increasing leverage, increasing volume and wide distribution of indebtedness, abandonment of any semblance of credit standards – was predestined to end in disaster. So why did traders embrace it so fervently? Espinosa confides that the mastermind of this scheme was the Federal Reserve. In 2002, the Fed announced a policy of maintaining a low Fed funds (overnight) rate long after a recession. It promised to raise rates only slowly, in small increments, to give market participants time to unwind positions taken in response to this policy. Thus, traders believed that “the fix was in” – they couldn’t lose the carry-trade game in the usual way, by being caught short when interest rates rose. Instead, they ended up losing in ways they didn’t anticipate. When the crisis arrived, their ability to borrow short was impaired and the securities they owned became illiquid and/or worthless.

Of course, the Fed was motivated by its own political aims. The Fed was bound and determined to prevent deflation in the worst way. That’s exactly what it did – by leading the country into a huge financial crisis and recession. It provoked traders into demanding its short-term funds and buying mortgage securities, thereby achieving both its policy aims and the administration’s political aims simultaneously.

Espinosa knows that the Fed’s actions were utterly unprincipled. But his only policy recommendation is that the Fed should “recognize the limits of its own powers.”

Jeffrey Rogers Hummel is an economic historian who has evolved into a leading monetary theorist. His study of Ben Bernanke completely overturns the mainstream view of Bernanke’s tenure as Federal Reserve Chairman. Bernanke is famous for his homage to Milton Friedman, so much so that he gained the sobriquet “Helicopter Ben” in reference to Friedman’s insistence that the Fed could drop money from helicopters if needed. But Hummel shows that Bernanke actually repudiated Friedman’s legacy by bailing out particular banks while refusing liquidity for the banking system in general. Bernanke epitomized the FDR prototype of a leader: a whirlwind of action, always willing to experiment with other people’s money and welfare and confident that his good intentions would justify any result. He centralized control of the financial system within the Fed, thereby earning Hummel’s title of “central planner in chief.” Bernanke completed the transition of the Fed from the role it played in its first decades, a banker’s bank and custodian of the money supply, to that of financial central planner for the economy and even for the world at large.

Lawrence Kotlikoff is an old-line Chicago economist who has held various academic, research and journalistic posts. His contribution reflects his heritage. He recognizes the disastrous role played by fractional-reserve banking in the economic history of the U.S. and the world. Why, he complains, do we continue to put up with banking panics, recessions and the accompanying dislocations? There is only one way out of this box. We must reform the practice of banking itself.

“The economic moral is simple. If you want markets to function, don’t let critical market-makers… gamble with their businesses. Apply the moral to banks and the regulatory prescription is clear. Don’t let banks take risky positions. Make banks stick to their two critical functions – mediating the payments system and connecting lenders to borrowers.”

Kotlikoff’s solution is limited-purpose banking, a proposal with roots in the old “Chicago Plan” of 1933. By law, banks would be limited to two types of activity: a plain-vanilla banking operation, run as a mutual fund offering an interest-paying checking-account service only; and another, entirely separate, operation running a mutual fund that offers opportunities in bonds, mortgages, stocks, private equity, real estate and other financial securities. The cash mutual fund would hold 100% reserves and would thus require no taxpayer protections of any kind, including government deposit insurance. In the investment form of banking, banks would initiate but not hold their own loans. A Federal Financial Authority would hold the loans and audit all books. According to Kotlikoff, “never again would a Bernie Madoff be free to custody his own accounts; i.e., to lie about the actual investments being made with investor money.”

George Selgin is one of the modern proponents of free banking. Implicitly, he rebuts Kotlikoff by asking the question: Suppose central banks and banking regulation did not exist; what arrangements would take their place? “Banks would issue banknotes that would be backed by some kind of reserve,” which could be specie or the U.S. monetary base. These banknotes would circulate and clear as checks do today. Interbank clearing and reserve transfers would stem overissue of banknotes by any individual bank. This system would handle the level of money demand by the public. It would provide an automatic system of equating the supply of money to the amount of money demanded. That is to say, it would automatically solve the central problem of monetary theory. In turn, this would automatically stabilize the total level of nominal dollar spending. Presto! At a stroke, the key problems of monetary theory, banking and Macroeconomics are solved.

Selgin then explains at length how the history of central banking reveals the inherently destabilizing nature of that institution. Not only does a central bank such as the Federal Reserve lack any automatic feedback system allowing it to equate the quantity of money demanded and supplied – this is in and of itself a fatal flaw of central banking – but central banks are also inherently compromised by their political connections. Originally, central banks were created to pander to the financial needs of the sovereign. Thus, the needs of the government took precedence over the needs of the public at large. Even today, we see the Fed catering to the financing needs of the government by holding interest rates artificially low for years to allow the federal government to finance its outsize public debt.

A Mass of Contradictions 

A quick perusal uncovers the mass of contradictions among the free-market contributors. Kotlikoff and Selgin are poles apart in their insistence on a rigid, government-controlled approach to banking (Kotlikoff) as opposed to a free-market-feedback approach (Selgin). Sumner and Beckworth are both prominent New Monetarists; both favor the latest Macroeconomic-stabilization-policy gimmick, Nominal Gross Domestic Product stabilization. So they have to be in agreement, right? Wrong. Sumner sees falling prices in the Depression as a sign of “deflation” that the Fed should have corrected with loose monetary policy. But Beckworth regards 2002-2004 as a “productivity boom” for the U.S., not a time of disastrous deflation. Well, the 1920s saw a similar boom in productivity, with similar effects on prices. Presumably, Beckworth would regard them similarly – which puts him squarely at odds with Sumner.

White and Selgin are both determined to put markets in control and depose the Fed. Sumner is equally keen to utilize the principle of market feedback because macroeconomists have disastrously misdiagnosed the ills of the economy. Moreover, he believes passionately that policymakers are not smarter than the market. But he proposes to take his carefully cultivated, pet panacea of NGDP stabilization and put it in the hands of the Fed – the very policymakers and macroeconomists he bad-mouths! And those same people are bossed around by… politicians! That puts Sumner somewhere on the opposite side of the world from White and Selgin.

As for Espinosa and Hummel… well, their analysis may be the most detailed, penetrating and acute of any being offered on the market today. But when it comes to drawing implications from their conclusions, they opt out.

Increasingly, the operative principle of Macroeconomics is becoming “to each his own (theory).”

DRI-180 for week of 2-15-15: The Midnight Ride of the Interest-Rate Alarmists

An Access Advertising EconBrief:

The Midnight Ride of the Interest-Rate Alarmists

In every Middlesex village and farm – and these days, the word “Middlesex” carries a decided double meaning – the alarm is being sounded. Interest rates will rise. The only question is when.

For six years, the question has been “if,” not “when.” At first, interest rates were held down by “stimulus” – the combination of fiscal and monetary policy embodied in the multi-billion (or trillion, depending on how one counts) dollar program enacted in the early days of the Obama administration in 2009. Then, when the “zero lower bound” beckoned, the QE series of quantitative-easing programs helped enforce a continuing ZIRP (Zero Interest Rate Policy).

Now, we have reached a point at which some middle-school youths have no memory of what a real interest rate looked or felt like. And quite a few adults in financial and policymaking circles have no desire to relive their old memories, either. They have mounted up, a la Paul Revere, to cry “The rate hike is coming! The rate hike is coming!”

In a recent Wall Street Journal op-ed (“Why the Alarms About a Slight Rate Hike?” WSJ, 02/18/2015), author Omid Malekan quotes several of these alarmists. “Charles Evans, president of the Chicago Fed and a voting member of the board that determines rate policy, said last month that raising rates too soon would be a ‘catastrophe.’ Former CEO of General Electric Jack Welch, during a Feb. 4 interview on CNBC, called a possible spring rate hike ‘ludicrous.’ Billionaire investor Warren Buffett told Fox Business Network on the same day that he didn’t think a rate increase this year would be ‘feasible.'”

Malekan’s view of these modern-day midnight riders is droll. “Catastrophe. Ludicrous. Not feasible. Really?” For the five decades before the crisis, Malekan notes, the benchmark overnight Fed funds rate averaged 5.7%, ranging from a high of 19% in the early 1980s down to 1% in the early 2000s. For most of that reference period – including the Vietnam War, most of the Cold War, the stagflation of the 1970s and two serious recessions in the 70s and 80s – the rate wasn’t far from that 5.7% average. Yet “since December 2008 the fed-funds rate has been kept close to zero.”

And what would the Fed’s proposed interest-rate hike, anathematized as unthinkable by its critics, do? It “would take the fed-funds rate from near zero to about 0.25%, and no that isn’t a misplaced decimal point. We aren’t talking about 2.5%, which would still be less than half the 1954-2007 average. We are talking about 0.25%, which would mean the Fed’s monetary policy would be rolled back from full pedal-to-the-metal to a fraction above pedal-to-the-metal. On a historical chart of the fed-funds rate, the proposed hike would barely be visible to the naked eye. Does that sound like inviting catastrophe?”

The fact that Malekan can mine humor from ZIRP and QE is testimony to the human capacity for finding fun in the darkest of circumstances. After all, one of the most popular motion-picture comedies of all time poked fun at nuclear war and ended with the destruction of the planet. A rise of one-quarter of a percentage point in interest rates is hardly that apocalyptic, so a little black humor isn’t out of place. But the underlying issues make this no laughing matter.

Hamlet or Waiting for Godot?

For free-market economists, the last six years have been a living nightmare. Like many nightmares, this one has been murky and hard to follow. It has many features of Shakespearian tragedy. The Fed often seems to be playing the role of Hamlet, as when it cannot make up its mind whether or when to raise interest rates. At other times, economic policy takes on the surrealism of a Samuel Beckett play. The QE sequence and the long wait for the return of normality to monetary policy cast the Open Market Committee as the characters from Waiting For Godot – waiting for someone or something they aren’t sure they know or want.

One of the alarmists cited above is actually an Open Market Committee member and Fed policymaker. This just adds to the atmosphere of surrealism surrounding economic policy. But it jibes with the ambivalent reactions that the Fed itself has displayed to its own rate-hike proposal.

The exasperation of Fed watchers is captured in another Wall Street Journal op-ed (“A Muddle of Mixed Messages From the Fed,” WSJ, 02/19/2015) by two members of the Shadow Open Market Committee, Charles W. Calomiris and Peter Ireland. The SOMC is a group of economic and finance professors whose avocation is criticizing the Fed’s monetary-policy actions.

These two men begin by noting that the conventional index of market expectations is the futures market. Interest-rate futures indicate that markets do not believe the Fed will follow through on its stated intention to raise interest rates in discrete steps over the next two years, beginning in mid-2015. Instead, markets expect rates to rise more slowly beginning later this year. Why does this divergence exist? Because the Fed has been giving mixed signals; Fed leaders say one thing (“rates will rise beginning in June”) but hint otherwise (by implying in various forums that both labor markets in particular and the overall economy in general are still shaky). Market participants believe the hints that Janet Yellen and other Fed officials are dropping, not the official policy statements issuing from the agency.

The Fed is legendary for using language reminiscent of the Delphic Oracle as a means of preserving its policy flexibility. While this is politically and bureaucratically useful to the agency, it is economically harmful. If market participants plan for one type of monetary policy and interest-rate environment but later experience a different one, their plans will be adversely affected. The very essence and purpose of interest rates is to coordinate the plans of savers and investors over time, so this confusion cannot be a good thing.

Without saying it in so many words, the two authors also accuse the Fed of reverting to old-line Keynesian habits. This wouldn’t be surprising in view of Chairwoman Janet Yellen’s left-wing Keynesian ideological slant. The hoary Phillips Curve tradeoff between inflation and unemployment has apparently been resuscitated, judging by the Fed’s pathological fear of deflation, its insistence on a 2% annual rate of inflation as a positive goal and its Ahab-like pursuit of the ever-receding goal of “full employment.” Calomiris and Ireland insist that falling oil prices are not something to be feared and cannot – in and of themselves – cause a deflationary Depression. Only a sudden and severe decline in the money supply can do that. In effect, they are invoking the spirit of Milton Friedman’s famous dictum, “Inflation is always and everywhere a monetary phenomenon” – only in reverse gear.

The Fed’s problem is that Keynesians like Yellen were trained to believe that the interest-rate hike they are now advertising will torpedo an economic expansion – and the existence of a current expansion is the ostensible justification for the interest-rate hike in the first place. As the two authors point out, “even with a hike beginning in midyear, interest rates would remain very low and still well below the inflation rate, implying a negative real interest rate.” Prior rate hikes in similar circumstances in 1994 and 2004 did not throw the economy into recession.

Calomiris and Ireland also resurrect another Friedmanism – his famous reference to the “long and variable lags” with which changes in monetary policy affect the economy. Since 2011, the broad measure of the money supply, M2, has increased at an annual rate of over 6%. The two men see the excess reserves of banks gradually being absorbed into the economy after long sitting idle on deposit at the Fed. This will eventually – sooner rather than later – ratchet the annual rate of inflation toward and above the Fed’s target rate of 2% and completely offset the downward price momentum created by the decline in oil prices. Why, they complain, doesn’t the Fed own up to this?
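The inflation worry Calomiris and Ireland raise follows from rough quantity-theory arithmetic. A sketch: the M2 growth rate comes from the article, but the velocity and real-growth figures below are assumptions added for illustration:

```python
# Quantity-theory approximation: inflation is roughly money growth plus
# velocity growth minus real output growth. Only m2_growth comes from the
# article; the other figures are illustrative assumptions.
m2_growth = 0.06         # ~6% annual M2 growth since 2011 (from the article)
velocity_growth = 0.0    # assume velocity stabilizes as idle reserves circulate
real_gdp_growth = 0.025  # assumed trend real growth

implied_inflation = m2_growth + velocity_growth - real_gdp_growth
print(implied_inflation)  # well above the Fed's 2% target
```

Under these assumptions, sustained 6% money growth implies inflation drifting above 3% once the lags play out – which is the authors' point about the Fed's own case being stronger than the Fed admits.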

Thus, the Fed’s case in favor of its announced policy is vastly stronger than the Fed pretends. The Fed is acting as though it doesn’t believe in its own policy.

The Crowning Irony

As if all this weren’t enough to leave any sensible observer groggy, we are forced to acknowledge that the Fed’s critics – fans of interest-rate hikes who are itching to “get back to a normal monetary policy” – suffer from their own blind spot.

Ironically, Calomiris, Ireland and Malekan are so dumbfounded by the Fed’s progressive march away from monetary reality that they haven’t noticed how far into the swamp that march has taken us. Having marched in, we can’t just turn around and march back out again and expect that the exit will be as smooth as the entry.

Calomiris and Ireland cite the interest-rate hikes of 1994 and 2004 as precedent for the one upcoming in June. But the previous increases did not take place in an economy staggering under the public and private debt load we carry today. Malekan cites the quarter-point increase derisively; who’s afraid of a big, bad quarter point, anyhow, he laughs? Hell, we used to live with fed-funds rates averaging 5.7% in the old days. So we did, but then the federal-government debt wasn’t $14 trillion, either. We weren’t forced to finance federal-government debt with short-term debt instruments to hold down the rate. If we had to pay even halfway realistic interest rates on our current debt, the federal-government budget would be eaten alive. Suddenly, the U.S. would become Europe – no, it would become Greece, facing a full-blown fiscal crisis that would instantly become a political crisis.
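The budget arithmetic behind that fear is easy to check. A back-of-envelope sketch using the $14 trillion debt figure from the paragraph above (the rate scenarios are illustrative, and maturity structure and rollover timing are ignored):

```python
# Annual debt-service cost at various average interest rates on a
# $14 trillion debt. Back-of-envelope only: ignores maturity structure,
# rollover timing and which rates actually apply to which instruments.
debt = 14_000  # billions of dollars

for rate in (0.0025, 0.0250, 0.0570):  # near-zero, modest, historical average
    annual_interest = debt * rate
    print(f"{rate:.2%} -> ${annual_interest:,.0f} billion per year")
```

At the 5.7% historical average, interest alone approaches $800 billion a year – which is why "just get back to normal rates" is not as painless as it sounds.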

Oh. Well, then – maybe it’s right to be so cautious, after all. Come to think of it, maybe we shouldn’t increase rates at all. Maybe we’re just stuck. You know, life really isn’t so bad. After all, unemployment has declined to the neighborhood of 5%. The economy is growing – slowly, but it’s growing. Let’s just stay where we are, then. Why is the Fed even talking about increasing rates?

From Op-ed Page to Front Page

Let’s jump from the op-ed page of the Wall Street Journal to the front page. The headline for 02/19/2015 reads: “Borrowers Flock to Subprime Loans.” Uh-oh; déjà vu all over again. “Loans to consumers with low credit scores have reached the highest level since the start of the financial crisis, driven by a boom in car lending and a new crop of companies extending credit. Almost four of every 10 loans for autos, credit cards and personal borrowing in the U.S. went to subprime customers in the first 11 months of 2014,” based on data supplied by Equifax.

In other words, the ultra-low interest rates stage-managed by the Fed have paved the way for a new financial crisis. The lead-in to the article didn’t even mention student loans, probably because the category of “subprime” is not meaningful for that type of loan. The auto-loan, credit-card and personal-finance industries are different from real estate. Banks no longer face the same risk exposures as they did in the early years of this millennium. Various elements of this impending crisis differ from the mortgage-finance-dominated crisis that preceded it. To be sure, history does not repeat itself – but it does rhyme, in the words of one sage observer.

It has now penetrated even the thick skulls of Federal Reserve policymakers that asset bubbles are not born spontaneously. They are generated by bad government policies, with interest-rate manipulation prominent among those. It cannot have escaped notice that fixed investment during the six years of ZIRP and QE has fallen to anemic levels. Apparently, it is not so much low interest rates that promote healthy levels of investment as real, genuine interest rates – that is, interest rates that actually reflect and coordinate the desires of savers and investors.

Savers are people who plan savings today and on an ongoing basis to provide for future consumption. Investors are people who plan investments today and on an ongoing basis to provide the future productive capacity that makes future consumption possible. Interest rates coordinate the activities of these two groups of market participants over differing future time periods. This serves to coordinate intertemporal production and consumption in a manner analogous to the way that the prices of goods and services coordinate production and consumption over short-term time periods. (In this connection, “short-term” refers to time periods too short for interest rates to play the major role.)

When the interest rates prevailing in the market are not real interest rates but artificial rates controlled by a central authority, rates are not performing their vital coordinative function. And that means that investments fail because investors were responding to a false market signal, one that told them that savers wanted more future goods and services than they actually did. Having been burned very badly by this process just a few years ago, investors evidently aren’t about to be suckered again. They’re sitting things out, waiting for markets to normalize so they can invest in a market environment that works instead of one that fails. (The exceptions are situations in which “the fix is in”: when investors can get subsidies from government or are sure they will be bailed out in case of failure.)

If this comes as a surprise, it shouldn’t. Over a 70-year period, the Soviet Union tried to live without functioning capital markets. Any mention of interest rates was verboten in Communist circles, but after a while the need for intertemporal coordination in production was so acute that Soviet planners had to invent the concept of an interest rate. But they couldn’t call their invention an interest rate without risking execution, so they called it an “efficiency index.” Alas, merely calling it that did not actually give it the coordinative properties possessed by genuine market interest rates and the Soviet economy collapsed under the weight of its failures in the late 1980s. Similarly, the Chinese Communist economy got nowhere until, in desperation, Deng Xiaoping liberated market forces sufficiently to allow flexible prices and interest rates to prevail in an independent, competitive sector of the Chinese economy. And it was this sector that thrived and promoted Chinese economic growth, while the official, government-controlled sector stagnated.

More and more, respected commentators and observers across the spectrum are speaking out about the untenable status quo into which the Fed has forced us. This talk usually takes the form of grumbling about the need for a return to a “more normal” policy. Of course, the problem is that any sort of normal policy is now impossible given the box we are in, but the point is that recognition of the harm caused by ZIRP and QE is becoming general.

So the Fed can’t just sit tight either, much as it would like to. The pressure to change the status quo has built up and is growing by the day. If the Fed continues to stall, it will be obvious to all and sundry that its so-called political independence is a fiction and that its policy is aimed at saving the government’s skin by preserving deficit finance and staving off fiscal reform.

Actually, the proper metaphor for our current dilemma is probably that of a man riding a tiger. Once the man is atop the tiger, he faces a pair of impossible, or at least wildly unattractive, options. If he gets off, the tiger will kill and eat him. But if he stays on, he will be scratched, clawed and whipsawed to death eventually. Really, the question he must be asking himself as he tries desperately to hang on is: How in the world did I ever get myself in this position?

That question is purely academic to the man on the tiger but vitally important to us as we contemplate the Fed’s dilemma. How in the world did the Fed ever get itself in this no-win situation? What made it seem attractive for the Fed to follow a policy that now seems disastrous? Alternatively, what made it seem necessary?

The Keynesian Link With ZIRP: Keynes’ Embrace of Marx

Close students of John Maynard Keynes know that Keynesian economic theory was mostly the work of Keynes’ followers. Students like Nicholas Kaldor, Piero Sraffa, Joan Robinson, Richard Kahn and John Hicks made numerous contributions to the theory that eventually dominated macroeconomics textbooks for some four decades and still survives today in skeletal form.

Nobel laureate Paul Samuelson once observed that Keynes’ General Theory was a work of genius in spite of its poor organization, confusing theoretical structure and intermittent moments of inspiration. Even more pertinent to our present predicament is that the second half of The General Theory leaves economics behind and takes up the cause of social policy.

Keynes faulted capitalism for its preoccupation with what he called the “fetish of liquidity.” It was the capitalist’s insistence on liquidity that underlay the speculative demand for money, which created idle balances that thwarted the expenditure of money necessary to purchase the short-term full-employment level of output. The payment of interest similarly thwarted the level of investment requisite for long-term full-employment. Capitalism would have to be supplanted with a kind of quasi-socialism in order for the market order to be preserved.

The linchpin of this new, stable market order would be a government-directed investment policy specifically intent on driving the rate of interest to zero by injecting fiat money as necessary. Only then would long-term investment be maximized, because the marginal efficiency of investment would be zero. (Another way of characterizing this outcome would be to say that all possible benefit would be squeezed out of investment.) Reading this second section of the General Theory makes it clear that Keynes was the original impetus behind ZIRP.

Keynes’ antipathy towards capitalism and the charging of interest brought him into general sympathy with Marx. Although they reached their respective conclusions by different routes, they both fervently sought the negation of capital markets and the castration of capitalism. Keynes felt he was preserving the institution of private property while Marx sought to destroy it, but in practice Keynesianism and Marxism have had similar effects on free markets and private property.

Should we be surprised, then, that Keynesians in Japan and the U.S. unveiled ZIRP to the world? Certainly not. ZIRP was the deep-seated secret desire of their hearts, the long-denied, long-awaited desideratum for which the financial crisis finally provided the pretext.

Reconsidering the Financial Rescue

Malekan no doubt echoed the views of most when he blandly observed that “although at the time few could argue with the need for such extraordinary Fed action,” things were different now; ZIRP and QE had outlived their usefulness and were no longer needed. But our full analysis suggests something quite different. If the Fed’s actions got us into a box from which there is no escape, then the only answer to the dilemma we face today is: Don’t get ourselves into this situation in the first place.

That means that we shouldn’t have ratcheted up federal-government debt with the Obama stimulus – or, for that matter, the Bush stimulus that preceded it. That conclusion will not resonate with most observers, given the overwhelming consensus that we had to do something to prevent the recurrence of a 1930s-style Depression and that massive government stimulus was the only thing to do. But we certainly aren’t forced to take that consensus verdict at face value now, six years after the fact. Six years ago, we felt under time pressure to do something fast, before it was too late. Now we have the luxury of retrospective review.

Neither stimulus lifted the U.S. economy out of recession. The Obama stimulus had hardly been spent when the U.S. economy officially emerged from recession in June 2009. The unemployment rate declined with painful slowness in the six years after the stimulus, notwithstanding that academic students of economics are taught that the only theoretical rationale for preferring stimulative policies is that they act faster than waiting for markets to eliminate unemployment on their own. There is compelling evidence that the decline in unemployment resulted mostly from long-term departures from the labor force and elimination of unemployment-benefit extensions rather than from job creation. Malekan remarks that “the fact that there is a debate about a quarter-point rate hike tells us that extraordinarily low interest rates have mostly failed to deliver a robust recovery. That people opposed to even the tiniest increase in rates are resorting to hyperbole tells us that they too know this.” And what did we get for what Malekan calls “modest benefits” – benefits we can now see were almost nonexistent – but a flock of trouble? We are riding a tiger with no way out of the fix that confronts us.

Although the reflex action of critics and commentators was to blame the financial crisis and the Great Recession on the usual suspects – greedy capitalists, Wall Street and deregulation – the passage of time has produced numerous studies decisively refuting this emotive response. The roster of government failures at the local, state and federal level was so lengthy that no single study has comprehensively included them all. That lengthy list is the only bit of evidence implying that things could have been worse than they actually were. Everything else – a priori logic and the long history of recessions since the founding of the republic – leads us to think that if left alone to recover, the U.S. economy would be vastly better off now than it actually is.

James Grant has recently written at book length about the severe U.S. recession of 1920-1921, which ended within eighteen months even though government took no countercyclical action at all. This is a template for government (in-)action in the face of impending recession. We have tried every form of preventive, stimulative and recuperative remedy the mind of man can devise and they have all failed. Maybe, if we’re lucky, we will someday have the chance to try the free-market cure.

DRI-178 for week of 2-8-15: A Closer Look at Prices

An Access Advertising EconBrief:

A Closer Look at Prices

There is no more important tool in the economist’s kit than the price of a good or service. Microeconomics was formerly called “price theory.” That conveys the correct impression that the theories of household, firm and input behavior are best characterized as processes of price formation.

Basic economics texts painstakingly develop the fundamentals of market pricing. This is necessary; we must crawl before walking, walk before running and run in order to stay in shape. But if all economists did was endlessly draw simple supply and demand curves and point meaningfully to the intersection of two lines, they would spend all their working hours in the classroom teaching undergraduate students. In order to avoid this ignominious fate, economists have had to grapple with the multifarious pricing schemes, strategies and tactics encountered in actual practice.

If market price equates the ex-ante quantity demanded and quantity supplied of a good, how do we explain the existence of sales? In particular, what about the perennial favorite, the after-Christmas sale? What are we to make of the market for coupons, which has been estimated at approximately $1 billion in face value in the U.S.? How should we evaluate the phenomenon of the manufacturer’s rebate, a practice that has proven almost as hard for economists to accept as it is for consumers to handle?

For many years, economists ruminated over these matters in the isolation of scholarly journals. Over the last decade or so, their musings have been publicized in books on popular economics. The idea is to make economic logic not merely accessible, but downright useful, to the masses.

How are we doing? You be the judge.

The After-Christmas Sale 

No class of beginning economics students would be at a loss to explain the origin and purpose of the after-Christmas sale. Everybody knows that retail stores make their bones at Christmas time. They place orders with sugarplum visions of Christmas sales dancing in their heads. Then comes the dawn on December 26th, and all those after-Christmas inventories have to be disposed of. Ideally, this should be achieved before yearend, to keep carryover inventories as low as possible for tax purposes. So, to make a virtue of necessity, the after-Christmas sale is born.

As with many a popular explanation for familiar economic phenomena, the beauty of this one is only epidermal. Sure, anybody can make a mistake – even a retail buyer. But the same mistake? Year after year after year? Buying for the most crucial segment of the store’s calendar year, on which its annual profitability depends? Should we suppose, then, that the professional life expectancy of a retail buyer is exactly one year – hired every January and fired every December 31 after annually misjudging the Christmas demand for the store’s product(s)?

No, this story does not withstand close scrutiny. Something else is at work here. What is it?

Economists believe two factors account for after-Christmas sales. Richard McKenzie identifies the first in his survey of pricing, Why Popcorn Costs So Much at the Movies: And Other Pricing Puzzles. To an economist, the most salient feature of the after-Christmas sale is an identical good, sold at two radically different prices in remarkably close temporal succession. One day, good X sells for price Y. The very next day, the same good X sells for Y less anywhere from 15% to 75%.

When economists encounter simultaneous price differentials for sales of the same good, they label the practice price discrimination. Economists use familiar words in their own inimitable way, and in this case the word “discrimination” need not be pejorative. (Technically, the practice carries antitrust penalties if sellers use it as a device to impede competition.) Sellers practice price discrimination to exploit differential price-sensitivities among their customers – by charging a higher price to less price-sensitive buyers and a lower price to more price-sensitive buyers, the seller can earn more total revenue than by charging a single price to all buyers.

Why does this tactic increase total revenues? The common sense of it is this: for price-insensitive buyers, the extra revenue from charging a higher price outweighs the revenue lost to the slight fall in purchases, while for price-sensitive buyers, the extra revenue from the increase in sales induced by a lower price outweighs the revenue sacrificed by charging less per unit.

If this sounds almost too good to be true, we should hasten to add that this condition does not exist for all goods and services and sellers may not be able to exploit it even if it does. But the possibility is enticing.
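The revenue arithmetic can be made concrete with a small sketch. The two demand schedules below are invented purely for illustration; nothing in the example depends on the particular numbers:

```python
# Hypothetical sketch of price discrimination: two buyer groups with
# different price sensitivities.  The linear demand curves are invented
# for illustration only.

def q_insensitive(p):
    """Price-insensitive group: quantity falls slowly as price rises."""
    return max(0.0, 100 - 2 * p)

def q_sensitive(p):
    """Price-sensitive group: quantity falls quickly as price rises."""
    return max(0.0, 120 - 6 * p)

prices = [n / 10 for n in range(1, 501)]  # candidate prices 0.1 .. 50.0

# Best single price charged to all buyers
p_one = max(prices, key=lambda p: p * (q_insensitive(p) + q_sensitive(p)))
rev_one = p_one * (q_insensitive(p_one) + q_sensitive(p_one))

# Best pair of prices, one per group
p_hi = max(prices, key=lambda p: p * q_insensitive(p))
p_lo = max(prices, key=lambda p: p * q_sensitive(p))
rev_two = p_hi * q_insensitive(p_hi) + p_lo * q_sensitive(p_lo)

print(f"single price {p_one}: revenue {rev_one:.2f}")
print(f"two prices {p_hi} (insensitive) / {p_lo} (sensitive): revenue {rev_two:.2f}")
```

With these made-up curves, the best single price earns less total revenue than charging the insensitive group a high price and the sensitive group a low one – which is the whole point of segmenting the market.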

In our after-Christmas sale, the general idea of price discrimination applies but the context is atypical. Instead of charging different prices to different buyers at the same point in time, sellers are charging different prices to the same buyers at different points in time. But the motivation is exactly the same – to earn more total revenue than would be earned by keeping price the same throughout. The “different points in time” are these: before Christmas and the day after Christmas (continuing for the duration of the sale, perhaps to Dec. 31).

Why are the different prices charged? That’s the easy part – most people are much less price-sensitive during the Christmas season and much more price-sensitive as soon as Christmas is over. And unlike many contexts, in which it may take keen analysis to distinguish the price-sensitive buyers from the price-insensitive ones, there is no buyer-identification problem to plague sellers. Sellers just change price tags at midnight on Christmas or, more conveniently, at closing time on Christmas Eve.

Viewed in this light, it is obvious that after-Christmas sales are not mistakes. Naturally, sellers want to have enough inventories on hand to take advantage of high-priced Christmas demand, but they do not expect to sell out or even come close. They have factored in the price-sensitive demand after Christmas. Indeed, they look upon the existence of the Christmas holiday as built-in market segmentation. The inherent problems facing any potential exercise of price discrimination are two: first, effectively dividing the market into price-sensitive and price-insensitive buyers; and second, preventing resales of the good or service by the first group to the second. The Christmas holiday solves both problems automatically with temporal separation. People are automatically willing to pay higher prices before Christmas and automatically unwilling to pay more afterwards. And low-price buyers cannot resell to high-price buyers because the high-price buyers already made their purchases first. And, as an added attraction, the low price after Christmas will lure new buyers who would never try the product under a single-price policy.

When we turn our attention to other types of sale, we find that price discrimination still plays a big explanatory role. Future EconBriefs will tackle some of these advanced cases. It is certainly not impossible that sellers can occasionally over-order stock and may need to reduce inventories; a sale may well be the expedient means to recover from this error. But we should be wary of imputing systematic errors to experienced market participants. Systematic errors result in insolvency – in which case, the erring seller isn’t around to repeat the mistake.

Coupons

Discount coupons have been around for well over a century. Your parents and grandparents have seen them throughout their lives. Today they can be found in newspapers, magazines, circulars and online. Has it ever occurred to you to wonder why they exist? After all, coupons are costly to produce and distribute – “costly,” that is, in the economic sense that the resources necessary to make and provide them have alternative uses.

The total cost to manufacturers has been estimated (for example, by McKenzie) at around $1 billion for the approximately 153 billion coupons distributed to Americans. (These figures are roughly a decade old and may be somewhat lower today.) If the purpose of coupons is simply and solely to discount the price of the product, might it be both simpler and cheaper to just lower the nominal market price rather than issue coupons?

No, not necessarily. Various arguments bolster the use of coupons. Some of them are transparent. Others are much less clear but just as compelling.

The simplest is the notification effect. Consumers cannot buy a product if they are ignorant of its existence. A coupon is a simple and relatively cheap way of announcing a new product and making it cheap for consumers to try it.

Coupons that reward repeat purchases strive to create brand loyalty. For years, some economists have distrusted markets, doubted the ability of consumers to make rational choices and celebrated the effectiveness of government intervention in markets. They have decried efforts to promote brand loyalty as wasteful, inefficient and downright evil. But it is difficult to see why a repeat purchase should be any more inimical to a consumer than an initial one. Apart from powerful narcotics, goods and services do not possess addictive qualities. Once the terms of the coupon have been fulfilled, the consumer is back at square one – but with the added knowledge gleaned from multiple trials of a new product. How bad can this be? It may well be a good thing indeed if the consumer’s brand loyalty reflects a genuine preference – and who is the economist to doubt that?

Still, the most-often cited rationale for coupon issuance is price discrimination. As noted above, sellers want and need to identify and separate the price-sensitive from the price-insensitive among their customers. Coupon distribution is a tried-and-true means to that end.

The term found in economics textbooks to characterize the degree of price sensitivity among buyers is price elasticity of demand. (The rule of thumb among economists is to shun common, ordinary, easily grasped words and phrases in favor of unusual, esoteric, obscure terms – preferably in a foreign language.) Factors conducive to high price elasticity are the existence of copious substitutes for the good, low real incomes and a price that comprises a high fraction of the buyer’s real income.
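As a rough sketch, price elasticity can be computed from two price-quantity observations. The figures below are hypothetical, chosen only to contrast an elastic response with an inelastic one:

```python
# Price elasticity of demand: percentage change in quantity demanded
# divided by percentage change in price.  All figures are hypothetical.

def arc_elasticity(p0, p1, q0, q1):
    """Midpoint (arc) elasticity between two price/quantity observations."""
    pct_q = (q1 - q0) / ((q0 + q1) / 2)
    pct_p = (p1 - p0) / ((p0 + p1) / 2)
    return pct_q / pct_p

# Many substitutes available: a cut from $5.00 to $4.50 lifts sales 100 -> 135
elastic = arc_elasticity(5.00, 4.50, 100, 135)

# Few substitutes, small budget share: the same cut barely moves sales
inelastic = arc_elasticity(5.00, 4.50, 100, 103)

print(round(elastic, 2), round(inelastic, 2))
```

Demand is called “elastic” when the magnitude of this number exceeds one and “inelastic” when it falls short of one; the first hypothetical good is elastic, the second inelastic.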

The collection and use of coupons takes up the buyer’s time. It takes time to collect the coupons – time to hunt them up in their various sources, time to clip and save them, time to gather them again before shopping and dig them up when paying. This time is economically significant. Time has alternative uses. The time of low-income people has a lower alternative-use value than that of higher-income people. And the value gained from the coupon comprises a larger fraction of the low-income person’s real income than it does of the higher-income person’s income, so low-income people stand to gain more from coupon use. For these reasons, we expect low-income people to be more price-sensitive and also to be more avid users of coupons. Thus, coupon distribution is a relatively cheap and effective way for sellers to segment their market into price-sensitive and (relatively) price-insensitive buyers. Thanks to the coupons, the price-sensitive buyers arrange to pay lower prices and the price-insensitive buyers pay the nominal (higher) price.

Since the costs of coupon production and distribution are low on a per-unit of production basis, sellers gain total revenue from coupon distribution even when the costs of coupons are factored into the accounting.

Economist Steven Landsburg, another noted exponent of popular economics in works such as The Armchair Economist and More Sex Is Safer Sex, has made another, related argument for coupons. This is the peak-load pricing argument.

Some businesses face a demand for their product(s) that varies dramatically according to time of day or season. If demand at the highest (peak) time exceeds or strains their capacity, it is in their interest to shift some of this peak demand to off-peak times. Electric utilities are one famous example of this phenomenon. At one point, movie theaters were another case, and this gave rise to twilight-hour movie pricing. These days, though, it’s a rare movie indeed that strains the capacity of a movie theater. Popular bars started the practice of “Happy Hour” to shift some of the late evening clientele to the early evening. Uber has seized upon the idea of charging higher prices during rush hours, something taxicabs should have done years ago but didn’t.

Landsburg noticed that coupon-clippers are disproportionately retirees, who tend to be low-income, price-sensitive people. They can shop in grocery stores in off-peak times such as mid-morning and mid-afternoon, while full-time workers must shop on their way to or from work. The working population has higher incomes and tends to be less price-sensitive on net balance. Obviously, the opportunity cost of taking off work makes grocery shopping during the off-peak disproportionately expensive for working people. But the coupons do make it more attractive for them to shop in the one off-peak time that is convenient for them – namely, the after-dinner hours. So, by distributing coupons, grocery stores can shift some of their demand from the morning and (particularly) evening rush-hour peaks to the off-peak, thereby lessening their staffing problems and increasing their total revenue and improving their competitive position relative to convenience stores.

Manufacturer’s Rebates

The $1 billion in annual coupon distributions are dwarfed by the estimated $6 billion in the value of manufacturer’s rebates offered annually. McKenzie estimates that “a third of all personal computers and their peripherals, and a fifth of all digital cameras, camcorders, and LCD TVs are sold with rebate offers.” He cites previous research suggesting that the total number of rebate offers approaches 400 million annually.

As most people already know, a rebate offers the return of money expended for a good or service in return for showing proof of purchase. That showing generally demands a fair amount of the buyer’s time and trouble – mailing in a “proof of purchase” (such as a receipt), perhaps accompanied by one or more completed documents, within a specified time period (called the “redemption period”). The redemption period may vary from a week to a year, but once it is exceeded the customer’s rebate privileges are lost.

Given the prominence of rebates on the retail landscape, it is not surprising that they have evolved a unique vernacular. The term lift is defined as the increase in sales stimulated by a given manufacturer’s rebate. Breakage is the percentage of customers who do not seek, or fail to obtain, a rebate during the redemption period. Slippage is the percentage of customers who obtain the rebate but then fail to cash (!) their rebate checks.

The existence of this vernacular implies that there is many a slip between the rebate cup and the consumer’s lip. According to self-styled consumer advocates, the slips constitute “rebate abuse.” Sellers may deliberately – how could it be inadvertent? – specify short redemption periods (say, one week), discontinuous with purchase (perhaps starting three weeks after purchase). The purpose behind such tactics is clear – to frustrate attempts at redemption.

Naturally, this will not endear the seller or the product to consumers. Perhaps, though, the company is on the ropes and facing insolvency; its managers are staging a desperate “hail Mary” promotion to raise cash. The company is willing to risk offending rebate redeemers when facing commercial oblivion. Since the alternative is simply to sell the product for its nominal price, consumers cannot be deemed worse off for being offered a rebate, however strict the terms – unless we consider the rising blood pressure and indignation they suffer upon reading the fine print in the rebate terms as part of their cost.

Natural curiosity leads to the question: What is the average rate of redemption, anyway? Economists believe the average rate may be as high as 40-50%. The term “average” (or mean, as statisticians call it) is a measure of central tendency. It is useful by itself, but vastly more useful if contemplated alongside the amount of variance around that central tendency. In this case, the variance is huge.
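The point about central tendency versus spread can be illustrated with two invented sets of redemption rates that share the same mean:

```python
# Two hypothetical collections of rebate-redemption rates.  The numbers
# are invented solely to show why the mean alone is misleading.
from statistics import mean, pstdev

clustered = [0.42, 0.45, 0.48, 0.45]   # every offer redeemed near 45%
dispersed = [0.02, 0.05, 0.93, 0.80]   # nearly nil to nearly universal

print(mean(clustered), pstdev(clustered))    # same center, tight spread
print(mean(dispersed), pstdev(dispersed))    # same center, huge spread
```

Both made-up averages sit in the 40-50% range, yet the second set mixes near-zero redemption with near-universal redemption – exactly the kind of extremes observed in practice.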

Some rebate offers produce redemption rates at or nearing 100%. McKenzie cites the case of firms producing digital scanners whose rebate offers were universally redeemed – after which the companies went out of business! Before writing this off as colossal misjudgment, we should ponder the not unlikely possibility that the companies were seeking a last-ditch lift from rebates to avoid insolvency, hoping that breakage would be their salvation. In the digital age, fierce competition and low prices have been the handmaidens of technological innovation.

At the other extreme, sometimes the redemption rate is virtually nil. McKenzie quotes one retailer whose comments are quite revealing, if not perceptive: “Manufacturers love rebates because redemption rates are close to none… they get people into stores, but when it comes time to collect, few people follow through. And this is just what the manufacturer has in mind.” As McKenzie notes, this is wrong not just in practice but also in theory. If rebates were really a guaranteed way to increase sales, everybody would use them. The truth is much more complicated, hence more interesting.

By now, readers can begin to appreciate the basic strategy behind rebates. Manufactured products such as computers and printers carry price tags large enough to stimulate price sensitivity among many consumers. It is in the interest of sellers to sort out the price-sensitive from the price-insensitive buyers and charge differential prices. This is not easy; sellers must segment their market and prevent resales by low-price buyers to high-price buyers.

Manufacturer’s rebates perform market segmentation for the same reasons that coupons do. They are attractive to price-sensitive lower-income buyers whose time has a lower value and who therefore are more willing to take the time to comply with rebate terms. Thus, this is the market segment that actually gets the lower price; that is, the market price discounted by the amount of the rebate.

McKenzie also points out that rebates affect – and are affected by – the reputation of the company offering them. This can give them the quality of a “self-enforcing contract.” (The term was first used by University of Chicago economist Lester Telser.) He uses Dell Computers as a case in point. When Dell offers a rebate, consumers take it seriously. They know Dell will follow through on terms because any slip-ups would harm its reputation – something Dell can ill afford. In turn, that makes Dell’s rebate promotions that much more effective in terms of their lift. So even though Dell’s breakage will be minimal, its lift will be maximal – giving it a solid, consistent, dependable return on its rebate program.

Manufacturer’s rebates may be the most controversial of all pricing policies because their terms offer such scope for variation and their results are so variable. Any detrimental effect on consumers resulting from a manufacturer’s rebate cannot help but be small, not to say minuscule. Related complaints are purely emotional – which makes them an ideal topic for the political left wing, which is bereft of intellectual content and must rely entirely on emotive appeals.

Prices and Information

In his Preface, McKenzie quotes the famous passage from F. A. Hayek’s “Economics and Knowledge,” in which the late Nobel laureate describes the value of prices as collectors and transmitters of information. The various pricing practices of sellers put this feature on display. We cannot contemplate any central authority possessing or acquiring the quantity or quality of information that is routinely exchanged by the price system. Sellers have the strongest possible incentive to benefit consumers, while a central authority’s only institutional incentives are political. The more one learns about market pricing, the stronger the case for it becomes.

A future EconBrief will explore the links between market pricing and the evolutionary development of the human brain.

DRI-162 for week of 2-1-15: It Happens Every Season

An Access Advertising EconBrief:

It Happens Every Season

The Super Bowl has come and gone. And with it have come stories on the economic benefits accruing to the host city – or, in this case, cities. The refrain is always the same. The opportunity to host the Super Bowl is the municipal equivalent of winning the Powerball lottery. Thousands – no, hundreds of thousands of people – descend on the host city. They focus the world’s attention upon it. They “put it on the map.” They spend money, and that money rockets and ricochets and rebounds throughout the local economy with ballistic force, conferring benefits left, right and center. We cannot help but wonder – why don’t we replicate this benefit process by bringing people and businesses to town? Why wait in vain on a Super Bowl lottery when we can instead run our own economic benefit lottery by offering businesses incentives to relocate, thereby redistributing economic benefits in our favor?

It happens every winter. In fact, publicity about economic development incentives (EDIs) is always in season, for they operate year-round. Nowadays almost every state in the union has a government bureau with “economic development” on its nameplate and a toolkit bulging with subsidies and credits.

For years, the news media have mindlessly repeated this stylized picture of EDIs, as if they were all repeating the same talking points. Both the logic of economics and empirical reality differ starkly from this portrait.

EDIs In a Nutshell

The term “EDIs” is shorthand for a variety of devices intended to make it more attractive for particular businesses to relocate to and/or operate in a particular geographic area. The devices involve either taxes or subsidies. Sometimes a business will receive an outright grant of money to relocate, much as an individual gets a relocation bonus from his or her company. Sometimes a business will receive a tax credit as an inducement to relocate. The tax credit may be of specified duration or indefinite. Sometimes the business may receive tax abatement – property tax abatements are especially favored. Again, this may be time-limited or indefinite. Sometimes the tax or subsidy is implicit rather than explicit. Sometimes businesses will even receive production subsidies in excise form; that is, a per-unit subsidy on output produced.

Various forms of implicit or in-kind benefit are also offered. These include grants of land for production facilities and exemption from obligations such as payment for municipal services.

These do not exhaust the EDI possibilities but the list is representative and suggestive.

A Short, Sour History of EDIs

Proponents of EDIs indignantly reject the charge that their ideas are new. On the contrary, government favors to business trace back to the early years of the republic, they insist.

It is certainly true that the early decades of the 19th century saw a boom – today, we would call it a “bubble” – in the building of canals, primarily as transportation media. The Erie Canal was the most famous of these. Although the canals were privately owned, they were heavily subsidized and supported by government. Are we surprised, then, that the canal boom went bust, sinking most of its investors like sash weights? Railroads are traditionally given credit for spearheading U.S. economic development in the 19th century, and the various special favors they won from state and local governments are legendary. They include subsidies and extravagant rights of way on either side of their trackage. But economist Robert Fogel won a Nobel Prize for his downward revision of the importance of railroads to the economic growth of 19th-century America, so there is less there than meets the mainstream historical eye.

The modern emphasis on EDIs can be traced back to the state industrial finance boards of the 1950s. These became more active in the late 1960s and 70s when the national economy went stagnant with simultaneous inflation and recession. Like European national governments today, state and local governments were trying to steal businesses from each other. They lacked central banks and the power to print money, so they couldn’t devalue their currencies as European nations are now doing serially. Instead, they used selective economic benefits as their tools for redistributing businesses in their favor. And, like Europe today, they found that these methods only work as intended when employed by the few. When everybody does it simultaneously, they cancel each other out. One state steals Business A from another, but loses Business B. How do we know whether that state has gained or lost on net balance? We don’t, but in the aggregate nobody wins because businesses are simply being reallocated – and not for the better. Of course, we haven’t yet stopped to consider whether the state even gained from wooing Business A in the first place.

We can look back on many celebrated startups and relocations that were midwifed by EDIs. In Tennessee, Nissan got EDI subsidies for relocating to the state in 1980. Later, GM built its famous Saturn plant there. In both cases, the big selling point was the large number of jobs ostensibly created by the project. We can get some idea of the escalation in the EDI bidding sweepstakes by comparing the price tag per job over time. The Nissan subsidies cost roughly $11,000 per job created. At this price, it is hard to envision an economic bonanza for the host community, but compare that to the $168,000 per job created that went to Mercedes Benz for relocating to Alabama in 1993. In 1978, Volkswagen promised 20,000 jobs for the $70 million it got for moving to Pennsylvania, but ended up delivering only about 6,000 jobs before closing the plant within a decade.
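The Volkswagen figures above imply a wide gap between the promised and the delivered price per job. A quick check of the arithmetic, using only the numbers quoted in the text:

```python
# Figures from the text: $70 million for a promised 20,000 jobs,
# of which only about 6,000 materialized.
vw_subsidy = 70_000_000
jobs_promised = 20_000
jobs_delivered = 6_000

cost_per_promised_job = vw_subsidy / jobs_promised    # $3,500 per job
cost_per_delivered_job = vw_subsidy / jobs_delivered  # roughly $11,667 per job
```

At $3,500 per promised job the deal sounded cheap; measured against jobs actually delivered, it cost more than three times as much.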

There is every reason to believe that these results were the rule, not the exception. Economists have identified the phenomenon known as the “winner’s curse,” in which winning bidders often find that they had to bid such a high price to win that their benefits were eaten up. Economists have long objected to the government practice of setting quotas on imported goods because the quota harms domestic consumers more than it benefits domestic producers. Moreover, governments customarily give import licenses to politically favored businesses. Economists plead: Why not open up the licenses to competitive bid? That would force would-be beneficiaries of the artificial shortage created by the quota to eat up their monopoly profits in the price they pay for the import license. Then taxpayers would benefit from the revenue, making up for what they lose in consumption of the import good. This same principle prevents cities from benefitting when they “bid” against other cities to lure firms by offering them subsidies and tax credits – they have to offer the firm such lucrative benefits to win the competition against numerous other cities that any benefits provided by the relocating business are eaten up by the subsidy price the city pays.
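The winner's-curse logic can be illustrated with a small simulation. Assume (hypothetically) that landing a firm is worth the same true amount to every bidding city, that each city bids its own noisy but unbiased estimate of that value, and that the high bidder wins. Because the winning bid is by construction the most optimistic estimate, the winner overpays on average:

```python
import random

random.seed(42)  # fixed seed so the simulation is reproducible

TRUE_VALUE = 100.0   # hypothetical benefit of landing the firm
N_CITIES = 20        # number of competing jurisdictions
N_AUCTIONS = 10_000  # repetitions to average over

total_net = 0.0
for _ in range(N_AUCTIONS):
    # Each city's estimate is unbiased on average, but the *highest*
    # estimate -- the winning bid -- systematically overshoots the truth.
    bids = [TRUE_VALUE + random.gauss(0, 15) for _ in range(N_CITIES)]
    total_net += TRUE_VALUE - max(bids)

avg_winner_net = total_net / N_AUCTIONS  # negative: the winner's curse
```

The more cities that join the bidding, the further the winning bid drifts above the true value, and the deeper the winner's average loss.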

The Economics of Business Location

The general public probably envisions an economics textbook with a chapter on “economic development” and tips on how to lure businesses and which types of business are the most beneficial, as well as tables of “multiplier” benefits for each one.

Not! The theory of economic development is silent on this subject. The only applicable economic logic is imported from the theory of international trade. The case of import quotas provided one example. The specter of European nations futilely trying to outdo each other in trashing the value of their own currencies is another; international economists use the acerbic term “beggar thy neighbor” to characterize the motivation behind this strategy. It applies equally to states and cities that poach on businesses in neighboring jurisdictions, trying to lure them across state or municipal boundaries where they can pay local taxes and provide prestigious photo opportunities for politicians.

What about the Keynesian theory of the “multiplier,” in which government spending has a multiple effect on income and employment? Even if it were true – and all major Keynesian criticisms of neoclassical theory have been overturned – it would apply only under conditions of widespread unemployment. And it would apply only to national governments, which can set policies for the entire nation and have the power to control and alter the supply of money and credit and rates of interest. Thus, the principle would be completely inapplicable to state and local governments anyway.

Economists believe that there is an economically efficient location for a business. Typically, this will be the place where it can obtain its inputs at lowest cost. Alternatively, it might be where it can ship its output to consumers the cheapest. If EDIs cause a business to locate away from this best location by falsely offsetting the natural advantages of another location, they are harming the consumers of the goods and services produced by the businesses. Why? The business is incurring higher costs by operating in the wrong location, and these higher costs must be compensated by a higher price paid by consumers than would otherwise be true. That higher price combines with the subsidies paid by taxpayers in the host community to constitute the price paid for violating the dictates of economic efficiency.

Why do economists obsess over efficiency, anyway? The study of economics accepts as a fact that human beings strive for happiness. In order to attain our goals, we must make the best use of our limited resources. That requires optimal consumer choice and cost minimization by producers. When government – which is a shorthand term for the actions of politicians, bureaucrats and lower-level employees acting in their own interests – mucks up the signaling function of market prices, this distorts the choices made by consumers and producers. Efficiency is reduced. And this effect is far from trivial. A previous EconBrief discussed an estimate that federal-government regulations since 1949 have reduced the rate of economic growth in the U.S. by a factor of three, implying that average incomes would be roughly $125,000 higher today in their absence.

EDIs are a separate issue from regulation. They are more recent in origin but growing in importance. In 1995, the Minneapolis Federal Reserve published a study by economists Melvin Burstein and Arthur Rolnick, entitled “Congress Should End the Economic War Between the States.” At about the same time, the United Nations published its own study dealing with a similar phenomenon at the international level.

Borrowing once again from the theory of international trade, these studies view production in light of the principle of comparative advantage. Countries (or states, or regions, or cities, or neighborhoods, or individual persons) specialize in producing goods or services that they produce at lower opportunity cost than competitors. Freely fluctuating market prices will reflect these opportunity costs, which represent the monetary value of alternative production foregone in the creation of the comparative-advantage good or service. Free trade between countries (or states, regions, cities, neighborhoods or persons) allows everybody to enjoy the consumption gains of this optimal pattern of production.

Burstein, Rolnick, the U.N., et al. felt that politicians should not be allowed to muck up free markets for their own benefit and said so. That debate has continued ever since in policy circles.

The Umpires Strike Back: EDI Proponents Respond 

Responses of EDI proponents have taken two forms. The first is anecdotal. They cite cases of particular successful EDI regimes or projects. The cited case is usually a city like Indianapolis, IN, which enjoyed a run of success in luring businesses and a concurrent spurt of economic growth. A less typical case is Kansas City, KS, which languished for several decades in prolonged decay with a deserted, crumbling downtown area and crime-ridden government housing projects and saw its tax base steadily disintegrate. The city subsidized a NASCAR-operated racing facility on the western edge of its county, miles away from its downtown base. It also subsidized a gleaming shopping and entertainment district slightly inward of the racetrack. Both NASCAR and the shopping district have benefitted from these moves, and politicians have claimed credit for revitalizing the city by their efforts. A recent Wall Street Journal column described the policy as having revamped “the city and its reputation.”

The second argument consists of a few studies that claim to find a statistical link between the level of spending on EDIs and the rate of job growth in states. Specifically, these studies report “statistically significant” relationships between those two variables. This link is cited as justification for EDIs.

Both these arguments are extremely weak, not to say specious. It is widely recognized today that most investors are foolish to actively manage their own stock portfolios; e.g., to pick stocks in order to “beat the market” by earning a rate of return superior to the average rate available on (say) an index fund such as the S&P 500. Does that mean that it is impossible to beat the market? No; if millions of investors try, a few will succeed due to random chance or luck. Another few will succeed due to expertise denied to the masses.

Analogous reasoning applies to the anecdotal argument made by EDI proponents. A few cities are always enjoying economic growth for reasons having nothing to do with EDIs – demographic or geographic reasons, for example. With large numbers of cities “competing” via EDIs, a few will succeed due to random chance. But this does not make, or even bolster, the case for EDIs. Indeed, the use of the term “competition” in this context is really false, because cities do not compete with cities – only concrete entities such as businesses or individuals can compete with each other. It is really the politicians that are competing with each other. And this form of competition, quite unlike the beneficial form of competition in free markets, is inherently harmful.

This sophisticated rebuttal is overly generous to the anecdotal arguments for EDIs. Even if we assume that the EDIs produce a successful project – that is, if we assume that Saturn succeeds at its Tennessee plant or NASCAR thrives in Kansas City, KS – it by no means follows that one company’s gains translate into areawide gains in real income. A study by the late Richard Nadler found no gains at all in local Gross Domestic Product for Wyandotte County, in which Kansas City, Kansas resides, years after NASCAR had arrived. The logic behind this result, reviewed later, is straightforward.

The studies claiming to support EDIs lean heavily on the prestige of statistical significance. Alas, this concept is both misunderstood and misapplied even by policy experts. Its meaning is binary rather than quantitative. When a relationship is found “statistically significant,” that means it is unlikely to be the product of pure chance – but it says nothing about the quantitative strength or importance of the relationship. This caveat is especially germane when discussing EDIs, because all the other evidence tells us that EDIs are trivial in their substantive effect on business location decisions.
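The binary-versus-quantitative point can be made concrete with the standard t-statistic for testing whether a correlation is zero. With a large enough sample, even a trivially small correlation clears the conventional significance threshold, while a far larger effect in a small sample may not:

```python
from math import sqrt

def t_statistic(r, n):
    """t-statistic for testing whether a sample correlation r differs from zero."""
    return r * sqrt(n - 2) / sqrt(1 - r * r)

# A correlation of 0.02 explains 0.04% of the variance -- economically
# trivial -- yet with a million observations it is wildly "significant"
# (the 5% two-tailed critical value is about 1.96 for large samples).
tiny_but_significant = t_statistic(0.02, 1_000_000)

# A correlation of 0.3 is far more substantial, but in a sample of 20
# it fails to clear the critical value (about 2.10 at 18 degrees of freedom).
large_but_insignificant = t_statistic(0.3, 20)
```

So a study reporting that EDI spending is “statistically significantly” related to job growth may be reporting nothing more than a very large sample attached to a very small effect.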

For decades, intensive surveys have indicated that business executives select the optimal location for their business – then gladly take whatever EDIs are offered. In other words, the EDI is usually irrelevant to the actual location decision. But executives seal their lips when it comes to admitting this fact openly, because their interests lie in fanning the flames of the Economic War Between the States. That war keeps EDIs in place and subsidizes their moves and investments.

Thus, a statistical correlation between EDIs and job growth is not a surprise. But no case has been made that EDIs are the prime causal mover in differential job growth or economic growth among states, regions or cities.

Perhaps the best practical index of the demerits of EDIs would be the economic decline of big-spending blue states in America. These states have been high-tax, high-spending states that heavily utilized EDIs to reward politically favored businesses. This tactic may have improved the fortunes of those clients, but it has certainly not raised the living standards of the populations of those states.

If Not EDIs, What? 

It is reasonable to ask: If EDIs do not govern the wealth of states or cities, what does? Rather than offer selective inducements to businesses, governments would do better to offer across-the-board inducements via lower tax rates to businesses and consumers. Studies have consistently linked higher rates of economic growth with lower taxes on both businesses and individuals throughout the U.S.

Superficially, this strikes some people as counterintuitive. The word “selective” seems attractive; it suggests picking and choosing the best and weeding out the worst. Why isn’t this better than blindly lowering taxes for everybody?

In fact, it is much worse. Government bureaucrats or consultants are not experts in choosing which businesses will succeed or fail. Actually, there are very few “experts” at doing that; the best ones attain multi-millionaire or billionaire status and would never waste their time working for government. Governments fail miserably at that job. Better to allow the experts at stock-picking to pick stocks and relegate government to doing the very, very few things that it can and should do.

States and municipalities typically operate with budget constraints. They cannot create money as national governments can and are very limited in their ability to borrow money. So when they selectively give money to a few businesses with subsidies or tax credits, the remaining businesses or individuals have to pay for that in higher taxes. If lower taxes for a few are good for that few, then it follows that higher taxes for the rest must be bad for the rest. And this means that even if the subsidies promote success for the favored business, they will reduce the success of the other businesses and reduce the real incomes of consumers. In other words, the “economic development” promoted by government’s “subsidy hand” will be taken away by government’s “tax hand.” What the government giveth, the government taketh away. Oops.
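The “subsidy hand”/“tax hand” point is just budget-constraint arithmetic. Under a balanced local budget with no money creation or borrowing (a simplifying assumption, with hypothetical numbers), every dollar granted to the favored firm must be collected from everyone else, so the net injection into the local economy is zero even before counting any distortion costs:

```python
# Hypothetical city, for illustration only.
SUBSIDY_TO_FAVORED_FIRM = 5_000_000
OTHER_TAXPAYERS = 50_000   # remaining businesses and households

# With no money creation or borrowing, the subsidy is tax-financed.
extra_tax_per_payer = SUBSIDY_TO_FAVORED_FIRM / OTHER_TAXPAYERS

# What the subsidy hand giveth, the tax hand taketh away.
net_injection = SUBSIDY_TO_FAVORED_FIRM - OTHER_TAXPAYERS * extra_tax_per_payer
```

The subsidy redistributes $100 per remaining taxpayer to the favored firm; it does not create new local income.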

Lower taxes for everybody work entirely differently. They change the incentives faced at the margin, causing people to work, save and invest more. The increased work effort causes more goods and services to be produced. The increased saving makes more financial resources available for investment by businesses. The increasing investment increases the amount of capital available for labor to work with, which makes labor more productive. This increased productivity causes employers to bid up wages, increasing workers’ real incomes.

Lest this process sound like a free lunch, it must be noted that unless the increased incomes are self-financing – that is, unless the increased incomes provide equivalent tax revenue at the lower rates – government will have to reduce spending in order to fulfill the conditions for stability. Since modern government is wildly inflated – heavily bureaucratized, over-administered and over-staffed as well as obese in size – this should not present a theoretical problem. In practice, though, the willingness to achieve this tradeoff is what has defined success and failure in economic development at the state and local level.

Markets Succeed. Governments Fail

EDIs fail because they are an attempt by government to improve on the workings of free markets. Free markets have only advantages while governments have only disadvantages. Free markets operate according to voluntary choice; governments coerce and compel. Voluntary choice allows people to adjust and fine-tune arrangements to suit their own happiness; compulsion makes no allowance for personal preference and individual happiness. Since human happiness is the ultimate goal, it is no wonder that markets succeed and governments fail.

Free markets convey vast amounts of information in the most economical way possible, via the price system. Since people cannot make optimal choices without possessing relevant information, it is no wonder that markets work. Governments suppress, alter and distort prices, thereby corrupting the informational content of prices. Indeed, the inherent purpose of EDIs is exactly to distort the information and incentives faced by particular businesses relative to the rest. It is no wonder, then, that governments fail.

Prices coordinate the activities of people in neighborhoods, cities, regions, states and countries. In order for coordination to occur, people should face the same prices, differing only by the costs of transporting goods from place to place. Free markets produce this condition. Governments deliberately interfere with this condition; EDIs are a classic case of this interference. No wonder that governments, and EDIs, fail.

DRI-173 for week of 1-25-15: Anti-Price-Gouging Laws: The Cure Is the Disease

An Access Advertising EconBrief:

Anti-Price-Gouging Laws: The Cure Is the Disease

This week, New York City Mayor Bill De Blasio warned of an impending snowfall of two to three feet, accompanied by high winds. In anticipation of the upcoming blizzard, he slapped the city with a travel ban, effective at 11 PM on the following day. Only official snow-clearance and law-enforcement vehicles would be allowed on the streets. He seized the opportunity to remind New Yorkers that the travel emergency would trigger enforcement of New York State’s anti-price-gouging law, which forbids raising prices on goods and services beyond pre-emergency levels. Violations would be punished sternly, he assured his audience.

Oops. In the event, the blizzard forecast proved… er, optimistic in the quantitative sense or pessimistic in the qualitative sense. Snowfall fell short of one foot, causing no end of local grumbling by the ingrates who couldn’t simply be satisfied to avert disaster.

To economists, though, the real disaster isn’t the unavoidable inclement weather that strikes every year, nor is it the occasional failure of accurate weather forecasting. It is the self-infliction of wounds by laws passed to constrain a non-existent practice called “price-gouging.” The law purports to cure a non-existent ailment. The cure is far worse than anything the “disease” could inflict.

State Laws to Prevent and Punish “Price Gouging”

Nobody knows the origin of the term “price gouging.” It probably derives from the exercise of monopolies granted by monarchs under the old English common law, which is where we get the term “monopoly.” Since nobody could legally compete with them, they could figuratively gouge their price from the consumer’s hide without interference.

With the advent of big government in the 20th century, it was only a matter of time until this resentment of sellers was written into law. Legislatures needed a pretext for acting against sellers, though. Academia provided it in the 1930s with the “Imperfect Competition” revolution in economic theory. Led by Edward Chamberlin and Joan Robinson, this school pointed out that few, if any, actual markets corresponded to the textbook definition of a “perfect” market. Perfect competition required that no individual seller supply a sufficient quantity of output to materially influence market price through its pricing and output decisions. It also required that consumers view the output of each seller as homogeneous – otherwise, product quality might confer some degree of market (pricing) power on an individual seller. There should also be no barriers to entry into, or exit from, the market.

So all markets were “imperfect” and all sellers possessed “market power.” This homely truth gave the profession the small opening it needed to make a huge leap of logic: Most sellers were monopolists who must be restrained by the benevolent and enlightened force of government regulation from exercising their monopoly power. This conclusion provided a rationale for government intervention at the level of individual markets, or microeconomics. It was analogous to the role played in the 1930s by Keynesian economic theory in justifying government intervention at the macroeconomic level.

In World War II, the federal government’s Office of Price Administration (OPA) levied price controls, or maximum prices, on hundreds of industries. Although the public rationale for these controls was to prevent inflation, they served to accustom both the public and private business to the notion of government control of the price system. In practice, patriotism was at least as important in enforcing the price controls as inflation-control. Business owners who raised prices were open to charges of “war profiteering.” This was unpatriotic; it was “taking advantage of the crisis to make money” when they should have been “doing their part by sharing the sacrifice” borne by everybody else. In peacetime, the rationale of monopoly regulation could be slipped neatly into the vacuum left by inflation-fighting and patriotism.

In the late 1970s, the U.S. struggled in the throes of an “energy crisis.” The upward spike in oil prices initiated by the Organization of Petroleum Exporting Countries (OPEC) had hit Western industrialized nations hard. Threatened with across-the-board cost increases and associated widespread unemployment, their central banks chose the same remedy that is now being employed: rapid money creation. This created accelerating inflation but did not do much to resolve the unemployment problem. The melding of stagnation and inflation gave rise to a hybrid term of disaffection, stagflation.

It was against this backdrop that home heating fuel prices in New York State rose dramatically in the fall of 1978. The evolving American tradition was to blame the seller for the underlying conditions of supply and demand giving rise to an existing price. That is just what the New York state legislature did when it passed the first state law proscribing price-gouging. It took four years for Hawaii to produce the second such law in 1983. Connecticut and Mississippi followed suit in 1986. Then came the deluge; eleven more states joined the party in the 1990s and sixteen more in the first decade of the new millennium. Today, 38 states have laws forbidding price-gouging in some form. Just what is it that these laws forbid, anyway?

Amazing as it seems, the answer is far from clear. But the common denominator between the laws is the notion that special circumstances or “emergency” justify a significant curtailment of pricing freedom. When we try to determine what the curtailment is, why it is justified and which circumstances qualify as emergencies, we find ourselves shrouded in ambiguity.

In a reasonable world, a judicial review of these statutes would undoubtedly find them void for vagueness. But that is hardly their worst drawback. Even if it were possible to objectively and precisely define an emergency and specify a quantitative curtailment of price tailored to it, we would not want to do anything so perverse and counterproductive even if we could.

The Economics of Emergency Behavior

How do people behave in emergencies? Why do governments and opponents of free markets object to that behavior? What kind of behavior is desirable in those circumstances?

Consider the example posed by the impending blizzard in New York City. In these situations, people routinely rush to acquire advance stocks of common everyday consumption goods. Included in this category are such goods as food (eggs, milk, water, ice, coffee, soft drinks, bacon, meat), fuel (gasoline, propane, heating oil) and household supplies (toilet paper, light bulbs, paper towels, batteries, radios, shovels, ice melt) and suitable clothing (heavy coats, gloves, hats, boots). The vast majority of this behavior is simply a reallocation of purchases in time, or an intertemporal reallocation of demand. There is nothing invidious or harmful about this. Indeed, it obeys the simple principle of preparedness that we all learned as children, whether in the Boy Scouts or in school.

Governments typically act as though this is the result of panic – as if, because everybody can’t immediately purchase everything they want from stocks immediately on hand, it must be a bad thing. But this is ridiculous. There is no reason to treat this increase in demand differently than any other increase in demand for any other reason. After all, those affected certainly have good reasons for wanting the extra stocks, with their government promising them that a blizzard of unprecedented proportions will certainly descend upon them! The only question is: What is the best way of getting the people the extra goods they need, allowing them to push their purchases forward in time to prepare for the emergency?

The Laws of Supply and Demand are the best means ever invented for solving that problem. They act automatically and immediately without the need for government action or intervention. The Law of Supply says that sellers will produce more output for sale at relatively higher prices. The Law of Demand says that buyers will wish to purchase less at relatively higher prices. When the blizzard announcement is made, people rush to stores and to their computers to make purchases. At the previously existing prices, people would be willing to purchase vastly increased quantities of goods. But they don’t get those vastly increased quantities – at least, not instantaneously. The Law of Supply says that sellers will be willing to supply larger quantities of output, all right – but only at higher prices. Well, at successively higher prices consumers are progressively less enthusiastic about buying more output – they still want more, mind you, just not as much as they would if price were held rock steady. Eventually, price will rise enough to equate the willingness of sellers to produce and sell more and the willingness of buyers to buy more. In this context, “eventually” means a matter of hours or a day or so.
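The adjustment just described can be sketched with hypothetical linear demand and supply curves (the parameters below are illustrative only, not calibrated to any real market). The demand shift triggered by the blizzard announcement opens a shortage at the old price; the price rise both coaxes out more supply and trims the quantity demanded until the two meet:

```python
def equilibrium(a, b, c, d):
    """Clear the market where demand a - b*p meets supply c + d*p."""
    price = (a - c) / (b + d)
    return price, a - b * price

# Illustrative linear curves: demand Q = a - 2p, supply Q = 10 + p.
p0, q0 = equilibrium(100, 2, 10, 1)   # before the blizzard warning
p1, q1 = equilibrium(160, 2, 10, 1)   # demand pulled forward in time

# At the old price the shifted demand would leave an unfilled shortage...
shortage_at_old_price = (160 - 2 * p0) - (10 + 1 * p0)
# ...which the higher price eliminates: quantity supplied rises, and
# quantity demanded falls back until the market clears again.
```

In this example the price rises from 30 to 50, quantity traded rises from 40 to 60, and the 60-unit shortage that would exist at the frozen old price disappears – which is exactly the adjustment an anti-price-gouging law forbids.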

Notice that the oft-expressed fears of government are shown to be groundless. There is no need for government to step in, regulate price or otherwise prevent an economic disaster caused by panic reactions to the weather disaster. Changes in price induce the necessary changes in behavior that do two things – cause sellers to produce and sell more goods and people to want less. The combination of those two things solves the problem.

There is a second kind of behavioral reaction common to some types of disaster emergencies, such as hurricanes and tornadoes. The disaster may cause large amounts of destruction. This may give rise to additional demand for goods for replacement purposes in addition to the intertemporal reallocation demand just analyzed. The replacement demand case is best considered as an addendum to the first case, by assuming that the price system has solved the reallocation demand adequately but now faces the problem of handling the replacement demand for goods that have been destroyed or damaged by the disaster. This will include many of the same goods mentioned above, but also capital goods and consumer durables such as homes, vehicles, buildings and infrastructure. The goods may be demanded in final form or may need to be reconstructed or repaired, in which case the inputs required will be in demand.

Replacement demand differs from reallocation demand in that the latter merely shifts demand forward in time while the former actually increases the amount of goods demanded locally. There is obviously some scope for increasing the needed goods by drawing on local stocks and by drawing resources away from the production of other goods, as well as by pressing unused local resources into service. But the only way to fully satisfy replacement demand is by importing goods and resources into the local area from outside; that is, from other cities, states, regions and even countries.

Once again, the price system solves this problem. Higher local prices will increase profits locally; the higher local profits will attract resources from other cities, states and regions. The increase in resources will comprise an increase in supply that will reduce the shortfall in replacement goods. As long as a shortfall exists, local demand will keep prices high. Those high prices will keep profits high enough to attract outside resources. If the affected area is large enough and the time frame long enough, international investment may even be attracted to the area. Only when the shortfall in replacement demand is eliminated will prices no longer signal the need for an inflow of goods and resources from outside.

The cases of reallocation and replacement demand do not exhaust all the possibilities created by emergencies and disasters, but they do handle those targeted by state price-gouging laws. We can see that those laws are a clear case of reinventing the wheel. So far, so bad. Now – how does the reinvention work?

Anti-Price-Gouging Laws: Reinventing the Wheel Square 

So the price system solves the problem posed by the need for emergency disaster planning. Is it possible that anti-price-gouging laws might solve it better? Or fairer?

Anti-price-gouging laws are intended to stop price from rising or, more precisely, to stop price from rising beyond a certain point. In the analysis presented above, the price system solved the problem of emergency disaster planning precisely through the medium of an increasing price. Thus, the laws are an economic contradiction in terms. They seek to solve a problem by denying the solution to the problem. So the only way anti-price-gouging laws could improve on the price system would be by substituting another solution for price increases as a means of getting more goods and resources and persuading people to want fewer goods and resources.

They do not substitute any alternative solution. There is no alternative solution. Instead, they assert that an alternative state of affairs – a lower price and fewer goods and resources – ought to be preferred to the one that the price system would bring about. The laws do not explain or justify the superiority of the alternative they exalt. They just assert it.

The laws are justified by rhetoric. The rhetoric claims to be protecting consumers against rapacious sellers who are taking advantage of them by raising prices in an emergency. This contravenes the basic logic of economic exchange, which says that exchange occurs between a willing buyer and a willing seller. So how can either one be “taken advantage of?” The laws assert that it is “unfair” to charge higher prices in an emergency than under non-emergency conditions. This also contravenes established legal precedent, which defines a “fair price” as one agreed upon by a willing buyer and a willing seller. So how can such a price be “unfair?” It also contravenes centuries of human behavior, during which higher prices have been charged for emergency medicine than non-emergency medicine, for emergency hotel rooms than non-emergency hotel rooms and so on.

Since particular anti-price-gouging laws specify exact limits on price increases during emergencies compared to pre-emergency prices, it behooves us to deal with this specific issue. Take the example of a 10% limit on price increases – which, as it happens, is the limit imposed in more than one state. Proponents emphasize the supposed “fairness” of a 10% increase. But this is a non sequitur. In the first place, there is not and never has been any objective standard of fairness by which 10% (or any other number) could be adjudged fair.

Now go beyond the issue of fairness to consider the internal logic of the process itself. We previously discussed the dynamic reactions of sellers and buyers, in which each group reacts to the rising price by, respectively, increasing output and reducing desired purchases while continuing to want more than is available. As price goes up by 1%, 2%, 5%, 9%…the law and its proponents approve the outcomes. But suddenly, when the price increase hits 10% – bang! The adjustment process must stop even when some sellers and buyers want it to continue. This is self-contradictory nonsense; law proponents cannot justify this arbitrary limit without explaining why production and sale above the 10% limit is wrong while it is right below the limit.

Proponents argue that complaints by the citizenry justify restriction of high-priced production and sales. But complaints about speech don’t justify arbitrary restrictions on the First Amendment. The law has not traditionally allowed third parties to prohibit economic transactions among consenting transactors except on moral grounds – and anti-price-gouging laws make no valid moral case.

Another way of looking at this same example is to ask: Why do the laws allow any price increases at all? It would seem that proponents are guiltily aware that people want and need more goods and resources and that price increases are necessary for provision of them. As in the age-old joke (“we’ve already established what you are; now we’re just arguing about the price”), anti-price-gouging proponents have implicitly given up and recognized the truth about economic logic, but are determined to argue about the price of those additional goods and how it is determined.

To sum up briefly, anti-price-gouging laws do not solve the problem they purport to address because they do nothing whatever to provide more goods and resources to local areas affected by disaster emergencies. The rhetoric they assert in support of their claims of fairness – which attempt to persuade constituents that they should be happier with lower prices and fewer goods and resources – is illogical and contradicts common practice and long historical experience.

Anti-price-gouging laws not only reinvent the wheel – they reinvent it square.

Black Markets and Other Costs of Shortages

In wartime, which we can view as the ultimate emergency, governments commonly levy comprehensive wage and price controls that prevent prices from rising at all. These are even more draconian than current anti-price-gouging laws. Governments print money to finance the expenditures necessary to conduct the war. When the printed money finds its way into the income stream, it forms the basis for additional demand for goods and services. Citizens attempt to bid up the prices for goods and services. But the price controls do not allow this to happen. The result is chronic shortages.

Economists know this process well. Every microeconomics textbook describes it. Buyers must incur substantial “shoeleather costs” associated with being first in line to get goods and services; failing to do so may frustrate their purchase desires. Those costs are an increase in the effective economic price paid for the good or service. The quality of goods and services is degraded as sellers try to reduce quality as an alternative to raising price. The existence of a shortage allows sellers to pick and choose the buyers who will be satisfied and those who will be disappointed. If sellers have a taste for discrimination, they can exercise it freely. In efficiently functioning competitive markets, on the other hand, this taste is severely constrained by the fact that market clearing penalizes a seller who discriminates against a willing buyer. The output not sold to that buyer may go unsold – either permanently or for a long interval.

Most pertinent to the example of anti-price-gouging laws is the case of black markets. A short-run shortage means that the highest price a consumer would be willing to pay to get one more unit of the good or service is well above the market-clearing price that would prevail if government had never slapped on the controls in the first place. Of course, that extra-high price is not a legal price. But in an emergency, some consumers will be willing to disregard legal niceties to get their hands on the good or service. And sellers will be willing to violate the law to earn the super-high rate of profit that this super-high price will generate. Thus, conditions are perfect for existence of a black (illegal) market.

By passing anti-price-gouging laws, governments deliberately create the ideal environment for black markets. When black markets flourish, politicians put on their sternest face and solemnly promise to punish the evil, greedy malefactors.

Recent Attempts to Rehabilitate Anti-Price-Gouging Laws

Opponents of free markets now control the political process throughout the world. They hold the upper hand in public discourse. Emboldened by their superior status, they have recently sought to rehabilitate the long-moribund intellectual case for anti-price-gouging laws. We can best summarize these attempts by quoting from a prominent source. Harvard University political philosophy Professor Michael Sandel’s book Justice: What’s the Right Thing to Do? argues that economists stress economic welfare and freedom at the expense of virtue.

“Emotion is relevant,” claims Sandel – thereby rejecting millennia of philosophical argument in favor of reason and against emotion. Proponents of anti-price-gouging laws reflect “something more visceral than welfare or freedom. People are outraged at ‘vultures’ who prey on the desperation of others and want them punished, not rewarded with windfall profits… Outrage of this kind is anger at injustice… Greed is a vice, a bad way of being, especially when it makes people oblivious to the suffering of others… Price-gouging laws cannot banish greed, but they can at least restrain its most brazen expression, and signal society’s disapproval of it.” Sandel champions the idea of “shared sacrifice for the common good.”

Sandel’s ideas encapsulate the quintessence of 20th-century liberal thought – pure undifferentiated emotion, all logic and intellectual distinctions distilled out. If “emotion is relevant,” where do we start and stop in admitting it into the argument? Obviously, we start with the political constituents of liberals and stop when all its political opponents have been demonized. Subjective terms of opprobrium like “vultures,” “greed” and “windfall profits” have no objective correlative in science or logic. The behavior he complains of has specific economic value in achieving the goals of those he pretends to champion; that is, the greed of the vultures turns out to benefit the outraged sufferers and the windfall profits are the necessary by-product of their deliverance.

When goods and resources flow to the disaster area from the outside, people on the outside have fewer goods and services and those within the disaster area have more. This is real shared sacrifice in true economic terms, not the phony symbolic shared sacrifice Sandel pontificates about. This is the price system at work.

Are emergency-room doctors vultures? Do ambulance companies earn windfall profits? Are hotel owners greedy? They participate in everyday, routine market transactions in which prices rise in response to special, emergency circumstances. Why aren’t they accused of being evil and immoral?

Because there’s no political profit in it, that’s why. So much for the “new” arguments for anti-price-gouging laws, same as the old.

If Governments Know the Truth, Why Do They Enact Anti-Price-Gouging Laws?

It is obvious that governments know the economic truth about anti-price-gouging laws – otherwise, they would not allow any price increases at all during emergencies. So why do they insist on enacting laws that can only hurt people without helping them?

The answer is depressingly clear. In the environment of big government and absolute democracy, governments exist to further their own power, not to serve the needs of the public at large. Proponents of anti-price-gouging laws are opponents of free markets. These people constitute a special interest. Government serves this special interest by serving its own interest; e.g., the interests of politicians, bureaucrats and government employees.

Politicians observe the protests of anti-free-market groups. They respond with alacrity by promising to restore “fairness” with anti-price-gouging laws. Of course, this will require new laws. The laws will require a new agency or expansion of an existing agency. This will require hiring more employees and more administrators, as well as a bureaucrat to oversee them. Legislators will oversee the budget of the agency. The agency will exert power over all the businesses in the state at any time designated as an “emergency.” Legislators have the privilege of deciding what constitutes an emergency, giving them additional power over those businesses. Politicians will curry favor with the public by posing as public saviors and benefactors during every emergency, rather than being excoriated for taking a “do nothing” stance.

Government is in the business of producing more government, not benefitting the public. It benefits only that subset of the public directly connected with itself. That general rule applies to anti-price-gouging laws as it does to all other aspects of government not strictly within its true, narrow province.

There is no objective crime or bad outcome called “price gouging.” But the laws enacted to prevent it do have objectively bad outcomes and therefore constitute an evil in themselves.

DRI-172 for week of 1-18-15: Consumer Behavior, Risk and Government Regulation

An Access Advertising EconBrief: 

Consumer Behavior, Risk and Government Regulation

The Obama administration has drenched the U.S. economy in a torrent of regulation. It is a mixture of new rules formulated by new regulatory bodies (such as the Consumer Financial Protection Bureau), new rules levied by old, preexisting federal agencies (such as those slapped on bank lending by the Federal Reserve) and old rules newly imposed or enforced with new stringency (such as those emanating from the Department of Transportation and bedeviling the trucking industry).

Some people within the business community are pleased by them, but it is fair to say that most are not. But the President and his subordinates have been unyielding in their insistence that the regulations are not merely desirable but necessary to the health, well-being, vitality and economic growth of America.

Are the people affected by the regulations bad? Do the regulations make them good, or merely constrain their bad behavior? What entitles the particular people designing and implementing the regulations to perform in this capacity – is it their superior motivations or their superior knowledge? That is, are they better people or merely smarter people than those they regulate? The answer can’t be democratic election, since regulators are not elected directly. We are certainly entitled to ask why a President could possibly suppose that some people can effectively regulate an economy of over 300 million people. If they are merely better people, how do we know that their regulatory machinations will succeed, however well-intentioned they are? If they are merely smarter people, how do we know their actions will be directed toward the common good (whatever in the world that might be) and not toward their own betterment, to the exclusion of all else? Apparently, the President must select regulators who are both better people and smarter people than their constituents. Yet government regulators are typically plucked from comparative anonymity rather than from the firmament of public visibility.

Of all American research organizations, the Cato Institute has the longest history of examining government regulation. Recent Cato publications help rebut the longstanding presumptions in favor of regulation.

The FDA Graciously Unchains the American Consumer

In “The Rise of the Empowered Consumer” (Regulation, Winter 2014-2015, pp.34-41, Cato Institute), author Lewis A. Grossman recounts the Food and Drug Administration’s (FDA) policy evolution beginning in the mid-1960s. He notes that “Jane, a [hypothetical] typical consumer in 1966… had relatively few choices” across a wide range of food-products like “milk, cheese, bread and jam” because FDA’s “identity standards allowed little variation.” In other words, the government determined what kinds of products producers were allowed to legally produce and sell to consumers. “Food labels contained barely any useful information. There were no “Nutrition Facts” panels. The labeling of many foods did not even include a statement of ingredients. Nutrient content descriptors were rare; indeed, the FDA prohibited any reference whatever to cholesterol. Claims regarding foods’ usefulness in preventing disease were also virtually absent from labels; the FDA considered any such statement to render the product an unapproved – and thus illegal – drug.”

Younger readers will find the quoted passage startling; they have probably assumed that ingredient and nutrient-content labels were forced on sellers over their strenuous objections by noble and altruistic government regulators.

Similar constraints bound Jane should she have felt curiosity about vitamins, minerals or health supplements. The types and composition of such products were severely limited and their claims and advertising were even more severely limited by the FDA. Over-the-counter medications were equally limited – few in number and puny in their effectiveness against such infirmities as “seasonal allergies… acid indigestion…yeast infection[s] or severe diarrhea.” Her primary alternative for treatment was a doctor’s visit to obtain a prescription, which included directions for use but no further enlightening information about the therapeutic agent. Not only was there no Internet; copies of the Physicians’ Desk Reference were also unavailable in bookstores. Advertising of prescription medicines was strictly forbidden by the FDA outside of professional publications like the Journal of the American Medical Association.

Food substances and drugs required FDA approval. The approval process might as well have been conducted in Los Alamos under FBI guard as far as Jane was concerned. Even terminally ill patients were hardly ever allowed access to experimental drugs and treatments.

From today’s perspective, it appears that the position of consumers vis-à-vis the federal government in these markets was that of a citizen in a totalitarian state. The government controlled production and sale; it controlled the flow of information; it even controlled the life-and-death choices of the citizenry, albeit with benevolent intent. (But what dictatorship – even the most savage in history – has failed to reaffirm the benevolence of its intentions?) What led to this situation in a country often advertised as the freest on earth?

In the late 19th and early 20th centuries, various incidents of alleged consumer fraud and the publicity given them by various muckraking authors led Progressive administrations led by Theodore Roosevelt, William Howard Taft and Woodrow Wilson to launch federal-government consumer regulation. The FDA was the flagship creation of this movement, the outcome of what Grossman called a “war against quackery.”

Students of regulation observe this common denominator. Behind every regulatory agency there is a regulatory movement; behind every movement there is an “origin story;” behind every story there are incidents of abuse. And upon investigation, these abuses invariably prove either false or wildly exaggerated. But even had they been meticulously documented, they would still not substantiate the claims made for them and not justify the regulatory actions taken in response.

Fraud was illegal throughout the 19th and 20th centuries and earlier. Competitive markets punish producers who fail to satisfy consumers by putting the producers out of business. Limiting the choices of producers and consumers harms consumers without providing compensating benefits. The only justification for FDA regulation of the type provided for the first half of the 20th century was that government regulators were omniscient, noble and efficient while consumers were dumbbells. That is putting it baldly, but it is hardly an overstatement. After all, consider the situation that exists today.

Plentiful varieties of products exist for consumers to pick from. They exist because consumers want them to exist, not because the FDA decreed their existence. Over-the-counter medications are plentiful and effective. The FDA tries to regulate their uses, as it does for prescription medications, but thankfully doctors can choose from a plethora of “off-label” uses. Nutrient and ingredient labels inform the consumer’s quest to self-medicate such widespread ailments as Type II diabetes, which spread to near-epidemic status but is now being controlled thanks to rejection of the diet that the government promoted for decades and embrace of a diet that the government condemned as unsafe. Doctors and pharmacists discuss medications and supplements with patients and provide information about ingredients, side effects and drug interactions. And patients are finally rising in rebellion against the tyranny of FDA drug approval and the pretense of compassion exhibited by the agency’s “compassionate use” drug-approval policy for patients facing life-threatening diseases.

Grossman contrasts the totalitarian policies of yesteryear with the comparative freedom of today in polite academic language. “The FDA treated Jane’s… cohort…as passive, trusting and ignorant consumers. By comparison, [today’s consumer] has unmediated [Grossman means free] access to many more products and to much more information about those products. Moreover, modern consumers have acquired significant influence over the regulation of food and drugs and have generally exercised that influence in ways calculated to maximize their choice.”

Similarly, he explains the transition away from totalitarianism to today’s freedom in hedged terms. To be sure, the FDA gave up much of its power over producers and consumers kicking and screaming; consumers had to take all the things listed above rather than receive them as the gifts of a generous FDA. Nevertheless, Grossman insists that consumers’ distrust of the word “corporation” is so profound that they believe that the FDA exerts some sort of countervailing authority to ensure “the basic safety of products and the accuracy and completeness of labeling and advertising.” This concerning an agency that fought labeling and advertising tooth and claw! As to safety, Grossman makes the further caveat that consumers “prefer that government allow consumers to make their own decisions regarding what to put in their bodies…except in cases in which risk very clearly outweighs benefit” [emphasis added]. That implies that consumers believe that the FDA has some special competence to assess risks and benefits to individuals, which completely contradicts the principle that individuals should be free to make their own choices.

Since Grossman clearly treats consumer safety and risk as a special case of some sort, it is worth investigating this issue at special length. We do so below.

Government Regulation of Cigarette Smoking

For many years, individual cigarette smokers sued cigarette companies under the product-liability laws. They claimed that cigarettes “gave them cancer,” that the cigarette companies knew it and that consumers didn’t, and that the companies were liable for selling dangerous products to the public.

The consumers got nowhere.

To this day, an urban legend persists that the tobacco companies’ run of legal success was due to deep financial pockets and fancy legal footwork. That is nonsense. As the leading economic expert on risk (and the longtime cigarette controversy), W. Kip Viscusi, concluded in Smoke-Filled Rooms: A Postmortem on the Tobacco Deal, “the basic fact is that when cases reached the jury, the jurors consistently concluded that the risks of cigarettes were well-known and voluntarily incurred.”

In the early 1990s, all this changed. States sued the tobacco companies for medical costs incurred by government due to cigarette smoking. The suits never reached trial. The tobacco companies settled with four states; a Master Settlement Agreement applied to remaining states. The aggregate settlement amount was $243 billion, which in the days before the Great Recession, the Obama administration and the Bernanke Federal Reserve was a lot of money. (To be sure, a chunk of this money was gobbled up by legal fees; the usual product-liability portion is one-third of the settlement, but gag orders have hampered complete release of information on lawyers’ fees in these cases.)

However, the states were not satisfied with this product-liability bonanza. They increased existing excise taxes on cigarettes. In “Cigarette Taxes and Smoking,” Regulation (Winter 2014-2015, pp. 42-46, Cato Institute), authors Kevin Callison and Robert Kaestner ascribe these tax increases to “the hypothesis… that higher cigarette taxes save a substantial number of lives and reduce health-care costs by reducing smoking, [which] is central to the argument in support of regulatory control of cigarettes through higher cigarette taxes.”

Callison and Kaestner cite research from anti-smoking organizations and comments to the FDA that purport to find price elasticities of demand for cigarettes of between -0.3 and -0.7, with the lower figure applying to adults and the higher to adolescents. (The words “lower” and “higher” refer to the absolute, not algebraic, value of the elasticities.) Price elasticity of demand is defined as the percentage change in quantity demanded associated with a 1 percent change in price. Thus, a 1% increase in price would cause quantity demanded to fall by between 0.3% and 0.7% according to these estimates.
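The definition in the previous paragraph is just arithmetic. A minimal sketch using the elasticity estimates quoted above; the 10% price change is an illustrative assumption, not a figure from the article:

```python
# Price elasticity of demand = (% change in quantity demanded) / (% change in price),
# so the implied quantity response is elasticity times the price change.

def pct_quantity_change(elasticity, pct_price_change):
    """Percentage change in quantity demanded implied by a given elasticity."""
    return elasticity * pct_price_change

# Using the estimates quoted above, for a hypothetical 10% price increase:
adult_drop = pct_quantity_change(-0.3, 10.0)   # quantity falls about 3%
teen_drop = pct_quantity_change(-0.7, 10.0)    # quantity falls about 7%
```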

The problem with these estimates is that they were based on research done decades ago, when smoking rates were much higher. The authors estimate that today’s smokers are mostly the young and the poorly educated. Their price elasticities are very, very low. Higher cigarette taxes have only a minuscule effect on consumption of cigarettes. They do not reduce smoking to any significant extent. Thus, they do not save on health-care costs.

They serve only to fatten the coffers of state governments. Cigarette taxes today play the role played by the infamous tax on salt levied by French kings before the French Revolution. When the tax goes up, the effective price paid by the consumer goes up. When consumption falls by a much smaller percentage than the price increase, tax revenues rise. Both the cigarette-tax increase of today and the salt-tax increases of the 17th and 18th century were big revenue-raisers.
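The revenue logic above is simple multiplication. A sketch assuming an illustrative 10% effective price increase and the adult elasticity of -0.3 quoted earlier (both numbers are assumptions for the example, not data from the article):

```python
# Why a tax increase raises revenue when demand is inelastic:
# price rises proportionally more than quantity falls.

def revenue_change_factor(pct_price_increase, elasticity):
    """Multiplicative change in total spending P*Q after a price increase."""
    price_multiple = 1 + pct_price_increase / 100
    quantity_multiple = 1 + (elasticity * pct_price_increase) / 100
    return price_multiple * quantity_multiple

revenue_factor = revenue_change_factor(10.0, -0.3)
# Price up 10%, quantity down only 3%: spending (and hence the tax base)
# rises about 6.7% (1.10 * 0.97 = 1.067), so revenue rises with the tax.
```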

In the 1990s, tobacco companies were excoriated as devils. Today, though, several of the lawyers who sued the tobacco companies are either in jail for fraud, under criminal accusation or dead under questionable circumstances. And the state governments who “regulate” the tobacco companies by taxing them are now revealed as merely in it for the money. They have no interest in discouraging smoking, since it would cut into their profits if smoking were to fall too much. State governments want smoking to remain price-inelastic so that they can continue to raise more revenue by raising taxes on cigarettes.


Can Good Intentions Really Be All That Bad? The Cost of Federal-Government Regulation

The old saying “You can’t blame me for trying” suggests that there is no harm in trying to make things better. The economic principle of opportunity cost reminds us that the use of resources for one purpose – in this case, the various ostensibly benevolent and beneficent purposes of regulation – denies the benefits of using them for something else. So how costly is that?

In “A Slow-Motion Collapse” (Regulation, Winter 2014-2015, pp. 12-15, Cato Institute), author Pierre Lemieux cites several studies that attempted to quantify the costs of government regulation. The most comprehensive of these was by academic economists John Dawson and John Seater, who used variations in the annual Code of Federal Regulations as their index for regulatory change. In 1949, the CFR had 19,335 pages; by 2005, that total had risen to 134,261 pages, a seven-fold increase in 56 years. (Remember, this includes federal regulation only, excluding state and local government regulation, which might triple that total.)

Naturally, proponents of regulation blandly assert that the growth of real income (also roughly seven-fold over the same period) requires larger government, hence more regulation, to keep pace. This nebulous generalization collapses upon close scrutiny. Freedom and free markets naturally result in more complex forms of goods, services and social interactions, but if regulatory constraints “keep pace” they will restrain the very benefits that freedom creates. The very purpose of freedom itself will be vitiated. We are back at square one, asking the question: What gives regulators the right and the competence to make that sort of decision?

Dawson and Seater developed an econometric model to estimate the size of the bite taken by regulation from economic growth. Their estimate was that it has reduced economic growth on average by about 2 percentage points per year. This is a huge reduction. If we were to apply it to the 2011 GDP, it would work as follows: Starting in 1949, had all subsequent regulation not happened, 2011 GDP would have been 39 trillion dollars higher, or about 54 trillion. As Lemieux put it: “The average American (man, woman and child) would now have about $125,000 more per year to spend, which amounts to more than three times [current] GDP per capita. If this is not an economic collapse, what is?”
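The compounding behind that counterfactual can be checked in a few lines. This sketch assumes an actual 2011 U.S. GDP of roughly $15.5 trillion – an assumption added for illustration, since the article gives only the counterfactual totals:

```python
# A rough check of the compounding arithmetic behind the Dawson-Seater
# estimate: ~2 percentage points of forgone growth per year, 1949-2011.

years = 2011 - 1949                        # 62 years of forgone growth
growth_gap = 0.02                          # about 2 percentage points per year

growth_factor = (1 + growth_gap) ** years        # roughly 3.4x
counterfactual_gdp = 15.5 * growth_factor        # roughly $53 trillion
forgone_gdp = counterfactual_gdp - 15.5          # roughly $37 trillion forgone
# Both figures are of the same order as the article's ~$54T and ~$39T.
```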

Lemieux points out that, while this estimate may strain the credulity of some, it also may actually incorporate the effects of state and local regulation, even though the model itself did not include them in its index. That is because it is reasonable to expect a statistical correlation between the three forms of regulation. When federal regulation rises, it often does so in ways that require corresponding matching or complementary state and local actions. Thus, those forms of regulation are hidden in the model to some considerable degree.

Lemieux also points to Europe, where regulation is even more onerous than in the U.S. – and growth has been even more constipated. We can take this reasoning even further by bringing in the recent example of less-developed countries. The Asian Tigers experienced rapid growth when they espoused market-oriented economics; did their relative lack of regulation contribute to this economic-development success story? India and mainland China turned their economies around when they turned away from socialism and Communism, respectively; regulation still hamstrings India, while China is dichotomized into a relatively autonomous small-scale competitive sector and a heavily regulated and planned, government-controlled big-business economy. Signs point to a recent Chinese growth dip tied to the bursting of a bubble created by easy money and credit granted to the regulated sector.

The price tag for regulation is eye-popping. It is long past time to ask ourselves why we are stuck with this lemon.

Government Regulation as Wish-Fulfillment

For millennia, children have cultivated fantasies of magical figures that make their wishes come true. These figures apparently satisfy a deep-seated longing for security and fulfillment; Freud referred to this need as “wish fulfillment.” Although Freudian psychology was discredited long ago, the term retains its usefulness.

When we grow into adulthood, we do not shed our childish longings; they merely change form. In the 20th century, motion pictures became the dominant art form in the Western world because they served as fairy tales for adults by providing alternative versions of reality that were preferable to daily life.

When asked by pollsters to list or confirm the functions regulation should perform, citizens repeatedly compose “wish lists” that are either platitudes or, alternatively, duplicate the functions actually approximated by competitive markets. It seems even more significant that researchers and policymakers do exactly the same thing. Returning to Lewis Grossman’s evaluation of the public’s view of FDA: “Americans’ distrust of major institutions has led them to the following position: On the one hand, they believe the FDA has an important role to play in ensuring the basic safety of products and the accuracy and completeness of labeling and advertising. On the other hand, they generally do not want the FDA to inhibit the transmission of truthful information from manufacturers to consumers, and – except in cases in which risk very clearly outweighs benefit – they prefer that the government allow consumers to make their own decisions regarding what to put in their own bodies.”

This is a masterpiece of self-contradiction. Just exactly what is an “important role to play,” anyway? Allowing an agency that previously denied the right to label and advertise to play any role is playing with fire; it means that genuine consumer advocates have to fight a constant battle with the government to hold onto the territory they have won. If consumers really don’t want the FDA to “inhibit the transmission of truthful information from manufacturers to consumers,” they should abolish the FDA, because free markets do the job consumers want done by definition and the laws already prohibit fraud and deception.

The real whopper in Grossman’s summary is the caveat about risk and benefit. Government agencies in general and the FDA in particular have traditionally shunned cost/benefit and risk/benefit analysis like the plague; when they have attempted it they have done it badly. Just exactly who is going to decide when risk “very clearly” outweighs benefit in a regulatory context, then? Grossman, a professional policy analyst who should know better, is treating the FDA exactly as the general public does. He is assuming that a government agency is a wish-fulfillment entity that will do exactly what he wants done – or, in this case, what he claims the public wants done – rather than what it actually does.

Every member of the general public would scornfully deny that he or she believes in a man called Santa Claus who lives at the North Pole and flies around the world on Christmas Eve distributing presents to children. But for an apparent majority of the public, government in general and regulation in particular plays a similar role because people ascribe quasi-magical powers to them to fulfill psychological needs. For these people, it might be more apropos to view government as “Mommy” or “Daddy” because of the strength and dependent nature of the relationship.

Can Government Control Consumer Risk? The Emerging Scientific Answer: No 

The comments of Grossman, assorted researchers and countless other commentators and onlookers over the years imply that government regulation is supposed to act as a sort of stern but benevolent parent, protecting us from our worst impulses by regulating the risks we take. This is reflected not only in cigarette taxes but also in the draconian warnings on cigarette packages and in numerous other measures taken by regulators. Mandatory seat belt laws, adopted by state legislatures in 49 states since the mid-1980s at the urging of the federal government, promised the near-elimination of automobile fatalities. Government bureaucracies like the Occupational Safety and Health Administration have covered the workplace with a raft of safety regulations. The Consumer Product Safety Commission presides with an eagle eye over the safety of the products that fill our market baskets.

In 1975, University of Chicago economist Sam Peltzman published a landmark study in the Journal of Political Economy. In it, Peltzman revealed that the various devices and measures mandated by government and introduced by the big auto companies in the 1960s had not actually produced statistically significant improvements in safety, as measured by auto fatalities and injuries. In particular, use of the new three-point seat belts seemed to show a slight improvement in driver fatalities that was more than offset by a rise in fatalities to others – pedestrians, cyclists and possibly occupants of victim vehicles. Over the years, subsequent research confirmed Peltzman’s results so repeatedly that former Chairman of the Council of Economic Advisers N. Gregory Mankiw dubbed this the “Peltzman Effect.”

A similar kind of result emerged throughout the social sciences. Innovations in safety continually failed to produce the kind of safety results that experts anticipated and predicted, often failing to provide any improved safety performance at all. It seems that people respond to improved safety by taking more risk, thwarting the expectations of the experts. Needless to say, this same logic applies also to rules passed by government to force people to behave more safely. People simply thwart the rules by finding ways to take risk outside the rules. When forced to wear seat belts, for example, they drive less carefully. Instead of endangering only themselves by going beltless, now they endanger others, too.

Today, this principle is well-established in scientific circles. It is called risk compensation. The idea that people strive to maintain, or “purchase,” a particular level of risk and hold it constant in the face of outside efforts to change it is called risk homeostasis.

These concepts make the entire project of government regulation of consumer risk absurd and counterproductive. Previously it was merely wrong in principle, an abuse of human freedom. Now it is also wrong in practice because it cannot possibly work.

Dropping the Façade: the Reality of Government Regulation

If the results of government regulation do not comport with its stated purposes, what are its actual purposes? Are the politicians, bureaucrats and employees who comprise the legislative and executive branches and the regulatory establishment really unconscious of the effects of regulation? No, for the most part the beneficiaries of regulation are all too cynically aware of the façade that covers it.

Politicians support regulation to court votes from the government-dependent segment of the voting public and to avoid being pilloried as killers and haters or – worst of all – a “tool of the big corporations.” Bureaucrats tacitly do the bidding of politicians in their role as administrators. In return, politicians do the bidding of bureaucrats by increasing their budgets and staffs. Employees vote for politicians who support regulation; in return, politicians vote to increase budgets. Employees follow the orders of bureaucrats; in return, bureaucrats hire bigger staffs that earn them bigger salaries.

This self-reinforcing and self-supporting network constitutes the metastatic cancer of big government. The purpose of regulation is not to benefit the public. It is to milk the public for the benefit of politicians, bureaucrats and government employees. Regulation drains resources away from and hamstrings the productive private economy.

Even now, as we speak, this process – aided, abetted and drastically accelerated by rapid money creation – is bringing down the economies of the Western world around our ears by simultaneously wreaking havoc on the monetary order with easy money, burdening the financial sector with debt and eviscerating the real economy with regulations that steadily erode its productive potential.

DRI-177 for week of 1-11-15: ‘Je Suis Charlie?’ Not If We Use Our Brains

An Access Advertising EconBrief: 

‘Je Suis Charlie?’ Not If We Use Our Brains

The terrorist assault on the Paris satirical magazine “Charlie Hebdo” was nothing if not predictable. For over a decade, European depictions of the prophet Mohammed have met with murderous response. Earlier, in 1989, the novelist Salman Rushdie’s The Satanic Verses earned one of history’s worst reviews – a fatwa calling on every Muslim to kill its author. The book’s translator was killed in the wake of that decree. In 2004, a television film by Dutch director Theo van Gogh criticized the Islamic religion – van Gogh was killed because of it. Prior to this assault, Charlie Hebdo cartoonists had been attacked in 2011 for daring to satirize Islam.

By recent standards, the human toll of this terrorist attack was relatively modest – the 12 magazine employees and police killed in the raid, the four additional hostages and one policewoman who died in the aftermath and, of course, the terrorists who died in the attack or were later hunted down and killed. Compare that to the butcher’s bill of 2,000 villagers, mostly children and elderly, who died at the hands of the Boko Haram terrorists in Nigeria at about the same time. Apparently, the public visibility of a satirical literary magazine and the sacred cause of free speech invoked in its name – as if mass murder alone were not enough to provoke outrage – give Charlie Hebdo pride of place.

It seems we are fated to react emotively rather than cerebrally to these events. Hence the professions of shock and disbelief in official circles and the public resort to the finger-waving slogan “Je Suis Charlie” (I am Charlie). Displays of false bravado, after the fact, ignore the issues raised by the attack and – even more to the point – those raised by the actions that led to it. The ballyhooed rally attended by officials of numerous governments, from which a U.S. representative was conspicuously absent, will prove equally ineffectual as a counterweight to terrorism.

Is Free Speech a Free Good?

By all accounts, the issue of “Charlie Hebdo” objected to by terrorists and Muslim fundamentalists in general treated the prophet Mohammed irreverently. The attitude taken by the publishers was, and is, that religion deserves to be treated with healthy disrespect. As it happens, this has proved controversial, even after the terrorist attack.

In some quarters, not merely theological, it has been suggested that the magazine’s producers brought their fate on themselves by the coarseness and insensitivity of their actions. In turn, these suggestions have provoked indignant rebuttals that bad taste by cartoonists did not merit a death sentence. The one thread running through mainstream reaction to the attack has been that our embrace of free speech demands a defense of the magazine and its actions.

But is that true? Or, more precisely, in what sense or to what extent is it true?

The word “free” means different things in political philosophy and economics. A person is free to act in the philosophic sense if he is not subject to external constraint. A free good differs from an economic good in being costless; that is, in lacking a foregone alternative in its production or creation. “Free speech” is a political concept, not an economic one. All too often, alas, the lack of constraint on speech has been conflated with the ability to escape all adverse consequences of speech. In other words, the freedom to speak as one chooses has been confused with the power to control reactions to one’s speech.

A movie star is free to publicly disparage the foreign-policy views of her fan base, but if her free-speech exercise causes fans to shun her movies, she lacks the power to compel them. An employee is free to publicly ascribe his company’s falling net income to the CEO’s incompetence, but his right to free speech won’t secure his job thereafter.

Strictly speaking, the Charlie Hebdo attack does not raise any questions of free speech. There is no doubt of the magazine’s right – or rather, its employees’ right – to formulate and publish opinions that are uncongenial and even hateful to others. The question is: What follows from the probable reactions to the exercise of that free speech right?

In this case, it was a foregone conclusion that the offending issue would bring physical violence down upon the magazine. Indeed, employees had already been attacked after publication of previous issues. Does this mean that the magazine should not have published the offending issue?

No. But it does mean that the terrorist attack should have been treated like any other cost of doing business – factored into the decision-making of the firm. That implies that the firm should have been responsible for financing its own security against terrorist attack rather than relying on government protection. Press reports say that “light police protection” was afforded the magazine in response to previous attacks on employees and threats of future attacks. Clearly, this level of official protection was inadequate. What was needed was sufficient force to cope with terrorists trained in military methods and armed accordingly. And the security setup should have been designed to kill terrorists before they could reach the employees of the magazine.

Requiring Charlie Hebdo to pay its own security bill would have changed the terms of the decision faced by the magazine. What commentators glorify as “free speech” is really a literary product – let us call it “satirical humor” – created to serve the economic purposes of pleasing readers and attracting advertisers. Only if the potential marginal revenue from this product exceeds the marginal cost of production – which should properly include the cost of security – will the issue actually be created and published.
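The decision rule described above can be made concrete with a toy calculation. Every number below is invented for illustration (price, print cost and the security bill are all hypothetical); the only input loosely anchored in the source is the five-million-copy sales figure mentioned for the post-attack issue:

```python
# Hypothetical illustration of the publish/forgo decision when the magazine
# internalizes its own security costs. All figures below are invented.
price_per_copy = 3.00
expected_copies = 5_000_000          # post-attack issues reportedly sold ~5 million
printing_cost_per_copy = 1.00        # hypothetical per-copy production cost
security_cost = 4_000_000            # hypothetical cost of a hardened facility

# The issue is worth publishing only if marginal revenue covers the full
# marginal cost -- including the security bill the decision internalizes.
marginal_revenue = price_per_copy * expected_copies
marginal_cost = printing_cost_per_copy * expected_copies + security_cost
publish = marginal_revenue > marginal_cost

print(f"expected revenue: ${marginal_revenue:,.0f}")
print(f"full cost incl. security: ${marginal_cost:,.0f}")
print("decision:", "publish" if publish else "forgo")
```

With these made-up numbers the issue clears the bar comfortably; shrink the expected audience or raise the security bill and the same rule flips the decision to “forgo” – which is exactly the discipline the argument says a subsidized security guarantee removes.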

Does this seem improbable? It shouldn’t. The post-attack issue of Charlie Hebdo has already sold out two huge press runs totaling some five million copies, so the possibility that this kind of satire could actually be self-financing is not so unlikely after all. But whether improbable or not, these are the conditions under which it is appropriate to wave a red flag in front of bullheaded terrorists. They are economic, not moral or philosophical or political, criteria.

The Deterrent Effect

Actually, the economic arguments are not the only ones in support of this stance, even though they are decisive on their own merits. Terrorists have one thing in common with solitary mass shooters: they are undeterred by the threat of death. Both kinds of murderers are prepared to die. They are afraid only of failure. Thus, thwarting terrorists by killing them before they can strike not only spares immediate victims but also is the only potential deterrent to future terrorism.

The actual raid on Charlie Hebdo was a success from a terrorist perspective, since it achieved its tactical goals and the killers escaped the scene to claim credit for their crime. The perpetrators were eventually caught, to be sure, but that is something that terrorists accept and plan for. Each tactical success and the shocked, anguished reaction it generates wins new converts to the cause – that is what gives terrorism its name.

When the political right of free speech is exercised in the form of satiric magazine content, it becomes an economic good – and at that point it is subject to judgment on those terms. If it creates enough value to offset the costs of its production, it is a good thing. If not, it is better foregone.

Je Suis Charlie? Non

Readers of this space know that businesses are entities used by people as intermediaries; they do things for us that we cannot do for ourselves by ourselves. But they do not exist as living entities; all actions affecting and effected by businesses impact individual human beings. “Je Suis Charlie” has it backwards; the reverse is true, at least to the extent that the speaker is a consumer, employee or owner of Charlie Hebdo.

By making Charlie Hebdo’s costs of security part of its cost of doing business, we ensure that the people who pay the costs are the same ones as those who reap the benefits from its operations – specifically, from its exercise of free speech. If the money is raised from magazine sales, then consumers are paying the freight – which means in turn that the benefits they get from the satire exceed the costs of paying to protect the authors. Alternatively, maybe a financial angel considers the artistic principle worth defending with his cash – in which case, he benefits on net balance from subsidizing the firm’s security bill.

But there is no case for forcing uninvolved parties, who may be unaware of Charlie Hebdo’s existence or who may even disapprove of its activities, to pony up tax money on the firm’s behalf.

The mere fact that it is possible to poke fun at the prophet Mohammed does not make it a sacred obligation to do so, nor is every exercise of that right defensible in economic terms. If the authors have to rely on people who do not benefit from the value created by their exercise of free-speech rights to protect them from the predictable consequences of their own actions, then they are like children who make mischief, then seek the protection of their parents.

Of course, security tight enough to withstand a terrorist onslaught is expensive. But clever, incisive satire can attract a large audience and finance the fixed costs of a secure facility. Moreover, a single wealthy donor can substitute for a large subscriber base or red-hot newsstand sales. Throughout history, patrons have subsidized the cause of art straightforwardly – now we may have reached the point where the martial arts are called upon to sustain the pacific ones.

The point, then, is that free political speech is not free in the economic sense. Free political speech is an economic product that has costs and benefits, as do all other economic goods, and is judged by their comparison.

Is Protecting Charlie Hebdo a National-Defense Obligation? 

Every introductory economics textbook informs its readers that provision of national defense is properly a function of the national government. This job cannot be profitably undertaken by private business because it is a public good. Public goods fail the test of exclusion; private businesses cannot produce them because they cannot collect the money to pay the costs of production. By producing national defense for one citizen, a private firm or firms must necessarily provide it for all, thus enabling people to refuse to pay for it once it is produced and deployed. (This type of refusal is called free riding behavior.) At any rate, that is the standard argument used to justify monopolization of national defense by government.

But that argument does not apply in this case. We are not proposing to defend an entire nation against sudden, unprovoked attack. Instead, one business has placed itself in danger through its own deliberate actions and now provision must be made for its safety. There is no prospect of free riding behavior to discourage, only the matter of how best to provide the necessary security.

Private provision is efficient in the economic sense because it encourages the business, which is in the best position to gauge the benefits and costs, to indulge in the risky behavior only if the likely benefits exceed the costs of security. It is also efficient in the practical sense, since the federal government has a bad track record in combatting terrorism and is willing to provide security to individuals only in the form of witness-security type programs. (It appears that Salman Rushdie has survived for years under this type of regime.) Local police are typically willing to provide security only to witnesses in court proceedings. Thus, private security is the obvious choice not only by default but by preference.

As a useful comparison, consider a situation in which (say) a newspaper prints an editorial critical of (say) North Korea, which then launches a missile attack on the U.S. Now the nation is under attack without provocation by a foreign government. This is a true act of war – a legitimate exercise of the national-defense function of the federal government.

Time to Re-Start the War on Terror?

A popular response to the Charlie Hebdo assault has been to call for resumption of hostilities in the War on Terror. The U.S.’s protracted military withdrawal from Iraq and Afghanistan had essentially wound that war to a close. As with every war, we are left holding the bag of accretions of federal-government power inflicted on the citizenry on the pretext of wartime necessity – visualize yourself standing shoeless in an airport check-in line. Now the clarion calls to arms are sounding once more.

It is axiomatic that every failure of government leads to a call for … more and bigger government. In no other sphere of human conduct is failure rewarded so reflexively and lavishly. Here, the Paris police failed miserably to protect the staff of Charlie Hebdo even though it was obvious to the world that an attack was on the cards. We have also learned that the French federal government dropped its surveillance of the presumed perpetrators. (The standard excuse of budget cuts is advanced.) So what do commentators demand? An escalation of the size and scope of government intervention, of course – as if government intervention itself were a given and we were arguing only about its nature and magnitude.

In the largest terrorist incident in recent human history, the 9/11 airline-crash-suicide assault on the World Trade Center towers in New York City, we have a case study in the relative effectiveness of government and the private sector. At the highest level of government, different agencies within the federal government had advance knowledge of the attacks (or their likelihood) but could not or would not coordinate their knowledge to prevent them. Once the attacks were underway, a military establishment massive enough to patrol the world, devastate the world with nuclear bombs and man a defensive cordon around the United States could nevertheless not stop three commercial airliners piloted by amateur foreigners from flying halfway across the U.S. and (1) twice crashing into two of the world’s tallest buildings in our most populous city, then (2) crashing into and damaging the military’s own Pentagon headquarters. By contrast, a few civilian passengers on the fourth hijacked airliner, who were completely unarmed, unwarned and untrained, nevertheless managed to overcome the armed hijackers and nearly gain control of the plane before their captors deliberately crashed it short of its target. This unorganized handful of private citizens succeeded where their multibillion-dollar military and security establishment failed – they prevented the tactical attainment of the hijackers’ goal, thought to have been the destruction of the White House.

The history of terrorism relates one tactical success after another owing to the incompetence of government. Either the terrorists succeed or they are thwarted by their own incapability, but they are never stopped in their tracks by government. Yet with each success, promoters of the War on Terror raise the interventionist stakes by proposing ever more and greater government as the antidote.

Having lost the element of secrecy surrounding the NSA’s collation of phone records, government now recognizes the need to “use it or lose it.” The U.S. government will either have to prove both the safety and efficacy of NSA surveillance or see it go away. The national-security establishment cannot afford to let the Charlie Hebdo crisis go to waste; it must use it as the pretext for continuing that surveillance.

Public commentators play a role formerly labeled that of “useful idiot” in the days of Communist infiltration of American society. They insist that we can no longer afford to pretend that the NSA is a threat to liberty and must now acknowledge its effectiveness as a terror-fighting tool. In fact, it has no demonstrated effectiveness whatever; it is merely assumed to be effective because government intervention is the knee-jerk, default response to all problems in the wish-fulfillment world of public commentary.

The word “irony” is hardly adequate to this occasion. If there was ever a time not to rely on NSA-type macro surveillance tools, the Charlie Hebdo case is it.

The Needle and the Haystack 

Consider the rationale behind the NSA program of surveillance. It is designed to determine which terrorists will act and when and where they will strike. To learn this information, the U.S. government monitors the aggregate phone traffic of the United States – not listening to individual conversations but merely checking call records against one another. A useful metaphor for this technique is the needle in the haystack. In order to locate the rare terrorist event (the “needle”), the government sifts through billions of unrelated (non-terrorist) conversations (the “haystack”).
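A brief calculation makes the haystack's scale vivid. The numbers here are invented for illustration (record counts, plot counts and error rates are all hypothetical), but the shape of the result is robust: when genuine needles are vanishingly rare, even a very accurate screen buries them under false alarms.

```python
# Hedged sketch of why sifting the whole "haystack" is statistically hopeless.
# Every number below is invented for illustration only.
records_screened = 3_000_000_000     # call records examined (hypothetical)
true_plots = 10                      # genuine "needles" among them (hypothetical)
true_positive_rate = 0.99            # flag 99% of real plots (optimistic)
false_positive_rate = 0.0001         # wrongly flag 0.01% of innocent records

flagged_real = true_plots * true_positive_rate
flagged_innocent = (records_screened - true_plots) * false_positive_rate

# Probability that any given flagged record is actually a plot:
precision = flagged_real / (flagged_real + flagged_innocent)

print(f"innocent records flagged: ~{flagged_innocent:,.0f}")
print(f"chance a flag is a real plot: {precision:.2e}")
```

Even with an implausibly accurate screen, hundreds of thousands of innocent records get flagged for every handful of real plots, so the odds that any given flag is genuine are a tiny fraction of one percent. That is the statistical face of the absurdity the next paragraph describes.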

The absurdity of reliance on this technique in the Charlie Hebdo case is clear. We already possess the needle or, rather, two-thirds of it. Even before publication of the offending issue, it was a foregone conclusion that terrorists would strike against Charlie Hebdo, just as they intended to do against Salman Rushdie, just as they did against Theo van Gogh and just as they already did against Charlie Hebdo for less provocative behavior in the past. True, we didn’t know who would attack, but knowing the target and its fixed location was more than enough. In any case, even if the identities of the culprits had been known in advance, it might not have been sufficient to forestall the attack. What was called for was an ironclad defense – either an impenetrable security perimeter to discourage an attack or force deadly enough to kill the terrorists before they reached Charlie Hebdo itself.

Refusal to deploy this security and relying instead on some sort of NSA-type surveillance to detect the threat is a ridiculous strategy. It is tantamount to possessing the needle, but hiding it in a field the size of a large county and then ordering an army of government bureaucrats wielding magnets to walk around the field until they got it back via magnetic attraction.

“Defending” Free Speech But Losing Freedom

Inefficiency in fighting terrorism is bad enough. But that is just one side of the false coin minted by rejuvenated terror warriors who call for vesting anti-terrorism powers in comprehensive surveillance by central governments. The citizenry must take it on faith that the government will not use its surveillance powers to acquire unauthorized information about law-abiding citizens and will not misuse any such information that it does acquire.

Readers will indignantly interject that safeguards are in place to prevent abuse of surveillance powers. The problem is that the same government that does the surveillance also administers the safeguards. In practice, this guarantees that the safeguards are not safe and do not guard. The actual experience of the NSA to date, based on documented testimony, proves that abuses have already occurred. Government is the locus classicus of the old saw that insanity is defined by the practice of making the same mistakes over and over again but expecting the outcome to be different.

Free markets incorporate learning from mistakes because the profit motive creates a positive feedback loop. No such mechanism exists for government – elections provide no definite corrective link between specific errors and electoral penalty and, in any case, the correction comes too late to be of any consistent value. An election is one single discrete event and there are hundreds or thousands of political decisions that require feedback from the public. Markets provide feedback on virtually every relevant decision and action; governments seldom do. Markets work; governments fail. At most, governments and the public communicate through the filter of the news media, which distorts the flow of information and the resulting decision process.

Thus, we are on the verge of losing freedom on the pretext of defending free speech.