DRI-332 for week of 6-16-13: What Lies Ahead for Us?

An Access Advertising EconBrief:

What Lies Ahead for Us?

Last month, Federal Reserve Chairman Ben Bernanke announced that the Federal Open Market Committee is contemplating an end to the $85 billion-per-month program of bond purchases that has been dubbed “Quantitative Easing (QE).” The announcement was hedged with assurances that the denouement would come only gradually, when the Fed was satisfied that general economic conditions had improved sufficiently to make QE unnecessary. Nonetheless, the announcement produced a flurry of speculation about the timing and pace of the Fed’s exit.

The Fed’s monetary policy since the financial crisis of 2008 and the stimulus package of 2009 is unique in U.S. economic history. Indeed, its repercussions have resounded throughout the world. Its motives and means are both poorly understood and hotly debated. Shedding light on these matters will help us face the future. A question-and-answer format seems appropriate to reflect the mood of uncertainty, anxiety and fear that pervades today’s climate.

What was the motivation for QE, anyway?

The stated motivation was to provide economic stimulus. The nature of the stimulus was ambiguously defined. Sometimes it was to increase the rate of inflation, which was supposedly too low. Sometimes it was to stimulate investment by holding interest rates low. The idea here was that, since the Fed was buying bonds issued by the Treasury, the Fed could take advantage of the inverse relationship between a bond’s price and its yield to maturity by bidding up T-bond prices, which automatically has the effect of bidding down their yields. Because $85 billion worth of Treasury bonds constitutes such a large monthly chunk of the overall bond market, this depresses bond yields for quite a while after each T-bond auction. The last stimulative feature of the policy was ostensibly to stimulate the purchase of stocks indirectly by driving down the yields on fixed-income assets like bonds. With nowhere to go except stocks, investors would bid up stock prices, thus increasing the net worth of equity investors, who comprise some 40-50% of the population.
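The inverse relationship between a bond’s price and its yield to maturity is just present-value arithmetic. A minimal sketch (the bond’s terms here are illustrative, not actual Treasury figures):

```python
def bond_price(face, coupon_rate, ytm, years):
    """Present value of annual coupons plus repayment of face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + ytm) ** t for t in range(1, years + 1))
    pv_face = face / (1 + ytm) ** years
    return pv_coupons + pv_face

# A hypothetical 10-year, 3%-coupon bond at two market yields:
p_at_2pct = bond_price(1000, 0.03, 0.02, 10)  # yield bid down to 2%
p_at_4pct = bond_price(1000, 0.03, 0.04, 10)  # yield left at 4%
# Bidding the price up and bidding the yield down are the same event:
# p_at_2pct sits above par (1000), p_at_4pct below it.
```

Priced at a yield equal to its coupon rate, the bond trades exactly at par; any buying pressure that pushes the price above par pushes the yield below the coupon, which is the mechanism the Fed exploits.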

How was driving up stock prices supposed to stimulate the economy?

The ostensible idea was to make a large segment of Americans feel wealthier. This should cause them to spend more money. This sizable increase in overall expenditures should cause secondary increases in income and employment through the economic process known as the “multiplier effect.” This would end the recession by reducing unemployment and luring Americans back into the labor force.
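The “multiplier effect” invoked here is a geometric series: each round of new income is partly re-spent, and the rounds sum to a finite total. A sketch with an assumed marginal propensity to consume (MPC) of 0.8:

```python
def simple_multiplier(mpc):
    """Textbook Keynesian multiplier: 1 / (1 - marginal propensity to consume)."""
    return 1 / (1 - mpc)

def spending_after_rounds(initial, mpc, rounds):
    """Cumulative spending as each round of new income is partly re-spent."""
    return sum(initial * mpc ** n for n in range(rounds))

# With an assumed MPC of 0.8, $100 of initial spending implies total
# spending approaching $100 * 5 = $500 as the rounds play out.
```

The textbook claim is only as good as the assumption that the re-spending actually occurs, which is precisely what the rest of this piece calls into question.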

How did the plan work out?

Inflation didn’t increase much, if at all. Neither did investment, particularly when viewed in net terms that exclude investments to replace deteriorated capital stock. Stock prices certainly rose, although the consumption increases that followed have remained modest.

So the plan was a failure?

That would be a reasonable assessment if, and only if, the stated goal(s) of QE was (were) the real goal(s). But that wasn’t true; the real goal of QE was to reinforce and magnify the Fed’s overall “zero interest-rate policy,” called ZIRP for short. As long as it accomplished that goal, any economic stimulus produced was a bonus. And on that score, QE succeeded very well indeed. That is why it was extended and why the Fed is stretching it out as long as it can.

Wait a minute – I thought you just said that even though interest rates have remained low, investment has not increased. Why, then, is the Fed so hot to keep interest rates low? I always heard that the whole idea behind Fed policies to peg interest rates at low levels was to stimulate investment. Why is the Fed busting our chops to follow this policy when it isn’t working?

You heard right. That was, and still is, the simple Keynesian model taught to freshman and sophomore economics students in college. The problem is that it never did work particularly well and now works worse than ever. In fact, that policy is actually the proximate cause of the business cycle as we have traditionally known it.

But even though the Fed gives lip service to this outdated textbook concept, the real reason it wants to keep interest rates low is financial. If the Fed allowed interest rates to rise – as they would certainly do if allowed to find their own level in a free capital market – the rise in market interest rates would force the federal government to finance its gargantuan current and future budget deficits by selling bonds that paid much higher interest rates to bondholders. And that would drive the percentage of the federal government budget devoted to interest payments through the roof. Little would be left for the other spending that funds big government as we know it – the many cabinet departments and myriad regulatory and welfare agencies.

Even if you don’t find this argument compelling – and you can bet it compels anybody who gets a paycheck from the federal government – it should be obvious to everybody that the Fed isn’t really trying that hard to apply traditional stimulative monetary policy. After all, stimulative monetary policy works by putting money in public hands – allowing banks to make loans and consumer spending to magnify the multiplier effects of the loan expenditures. But Bernanke lobbied for a change in the law that allowed the Fed to pay interest to banks on their excess reserves.  When the Fed enforces ZIRP by buying bonds in the secondary market, it pays banks for them by crediting the banks’ reserve accounts at the Fed. The interest payments mean that the banks don’t have to risk making loans with that money; they can just hold it in excess reserves and earn easy profits. This is the reason why the Fed’s money creation has not caused runaway inflation, as government money creation always did in the past. You can’t have all or most prices rising at once unless the newly created money is actually chasing goods and services, which is not happening here.

But the mere fact that hyperinflation hasn’t struck doesn’t mean that the all-clear has been sounded. And it doesn’t mean that we’re not being gored by the horns of a debt dilemma. We certainly are.

Being gored by the horns of a dilemma is a pretty uncomfortable metaphor. You make it sound as though we have reached a critical economic crisis point.

We have. Every well-known civilizational collapse and revolution, from ancient Rome to the present day, has been preceded by a financial crisis resembling ours. The formula is familiar. The government has overspent and resorted to money creation as a desperate expedient to finance itself. This has papered over the problem but ended up making things even worse. For example, French support for the American colonies against Great Britain was the straw that broke the bank of the monarchy, fomenting the French Revolution. The Romanovs’ downfall occurred despite Russia’s increasing rate of economic growth in the late 1800s and because of financial profligacy and war – two causes that should be familiar to us.

It sounds as though government can no longer use the tools of fiscal and monetary policy to stimulate the economy.

It never could. After all, the advent of Keynesian economics after 1950 did not usher in unprecedented, uninterrupted world prosperity. We had recessions and depressions before Keynes wrote his General Theory in 1936 and have had them since then, too. And Keynes’s conclusions were anticipated by other economists, such as the American economists Foster and Catchings in the late 1920s. F.A. Hayek wrote a lengthy article refuting their arguments in 1929, and he later opposed Keynes throughout the 1930s and thereafter. The principles of his business-cycle theory were never better illustrated than by real-world events during the run-up to the recession and financial crisis in 2007-2008 and the later stimulus, ZIRP and QE.

It seems amazing, but Keynesian economists today justify government policies by claiming that the alternative would have been worse and by claiming responsibility for anything good that happens. Actually, the real force at work was described by the Governor of the Bank of England, Mervyn King, in the central bank’s February Inflation Report:

“We must recognize [sic], however, that there are limits to what can be achieved via general monetary stimulus – in any form – on its own. Monetary policy works, at least in part, by providing incentives to households and businesses to bring forward spending from the future to the present. But that reduces spending plans tomorrow. And when tomorrow arrives, an even larger stimulus is required to bring forward yet more spending from the future. As time passes, larger and larger doses of stimulus are required.”

King’s characterization of transferring spending or borrowing from the future accurately describes the effects of textbook Keynesian economics and the new variant spawned by the Bernanke Fed. Keynesians themselves advertised the advantage of fiscal policy as being that government spends 100% of every available dollar, while private consumers allow part of the same dollar to leak into savings. This dovetails exactly with King’s account. The artificially low interest rates created by monetary policy have the same effect of turning saving into current consumption.

Today, we are experiencing a grotesque, nightmarish version of Keynesian economics. Ordinarily, artificially low interest rates would stimulate excessive investment – or rather, would drive investment capital into longer-maturing projects that later prove unprofitable, like the flood of money directed toward housing and real-estate investment in the first decade of this century. But our current interest rates are so absurdly low, so palpably phony, that businesses are not about to be suckered by them. After all, nobody can predict when rates might shoot up and squelch the profitability of their investment. So corporations have pulled up their financial drawbridges behind balance sheets heavy with cash. Consumers have pulled consumption forward from the future, since that is the only attractive alternative to the stock investments that only recently wrecked their net worth. This, too, validates King’s conclusions. Whether “successful” or not, Keynesian economics cannot last because the policy of borrowing from the future is self-limiting and self-defeating.

Didn’t I just read that our budget deficit is headed lower? Doesn’t this mean that we’ve turned the corner of both our budget crisis and our flagging recovery?

If you read carefully, you discovered that the improvement in the federal government’s fiscal posture is temporary, mostly an accounting artifact that occurs every April. Another contributing factor is the income corporations distributed at year-end 2012 to avoid taxation at this year’s higher rates, which is now being taxed at the individual level. Most of this constitutes a one-time increase in revenue that will not carry over into subsequent quarters. Even though the real economic benefits of this are illusory, it does serve to explain why Fed Chairman Bernanke has picked this moment to announce an impending “tapering off” of the QE program of Fed bond purchases.

How so?

The fact that federal deficits will be temporarily lower means that the federal government will be selling fewer bonds to finance its deficit. This, in turn, means that the Fed will perforce be buying fewer bonds whether it wants to or not. Even if there might technically be enough bonds sold for the Fed to continue buying at its current $85 billion level, it would be inadvisable for the Fed to buy all, or virtually all, of an entire issue while leaving nothing for private investors. After all, U.S. government bonds are still the world’s leading fixed-income financial instrument.

Since the Fed is going to be forced to reduce QE anyway, this gives Bernanke and company the chance to gauge public reaction to their announcement and to the actual reduction. Eventually, the Fed is going to have to end QE, and the more accurately they can predict the reaction to this, the better they can judge when to do that. So the Fed is simply making a virtue out of necessity.

You said something a while back that I can’t forget. You referred to the Keynesian policy of artificially lowering interest rates to stimulate investment as the “proximate” cause of the business cycle. Why is that true and what is the qualifier doing there?

To illustrate the meaning, consider the Great Recession that began in 2007. There were many “causes,” if one defines a cause as an event or sequence of events that initiated, reinforced or accelerated the course of the recession. The housing bubble and ensuing collapse in housing prices was prominent among these. That bubble itself had various causes, including the adoption of restrictive land-use policies by many state and local jurisdictions across America, imprudent federal-government policies promoting home-ownership by relaxing credit standards, bank-regulation standards that positioned mortgage-related securities as essentially riskless and the creation and subsidy of government-sponsored agencies like Fannie Mae and Freddie Mac that implemented unwise policies and distorted longstanding principles of home purchase and finance. Another contributor to recession was the decline in the exchange-value of the U.S. dollar that led to a sharp upward spike in (dollar-denominated) crude oil prices.

But the reign of artificially low interest rates that allowed widespread access to housing-related capital and distorted investment incentives on both the demand and production sides of the market was the proximate cause of both the housing bubble and the recession. The interest rates were the causal agent most closely linked to the bubble, and the recession would not have happened without the bubble. Not only that, the artificially low interest rates would have triggered a recession even without the other independent influences – albeit a much milder one. Another way to characterize the link between interest rates and the recession would be to say that the artificially low interest rates were both necessary and sufficient to produce the recession. The question is: Why?

For several centuries, artificial lowering of interest rates has accompanied increases in the supply of money and/or credit. Prior to the 20th century, this was usually owing to increases in stocks of mined gold and/or silver, coupled with the metallic monetary standards then in use. Modern central banks have created credit while severing its linkage with government holdings of stocks of precious metals, thus imposing a regime of fiat money operating under the principles of fractional-reserve banking.

In both these institutional settings, the immediate reaction to the monetary change was lower interest rates. The effect was the same as if consumers had decided to save more money in order to consume less today and more in the future. The lower interest rates had complex effects on the total volume of investment because they affected investment through three different channels. The lower rate of discount and increased value of future investment flows greatly increased the attractiveness of some investments – namely, those in long-lived production processes where cash flows are realized in the relatively distant future. Housing is a classic example of one such process. Thus, a boom is created in the sector(s) to which resources are drawn by the low interest rates, like the one the U.S. enjoyed in the early 2000s.
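The discount-rate channel just described can be illustrated with simple present-value arithmetic. The figures below are invented for illustration: a payoff two years out stands in for a short production process, a payoff twenty years out for a long-lived one such as housing.

```python
def present_value(cash_flow, years_ahead, rate):
    """Discounted value today of a single future cash flow."""
    return cash_flow / (1 + rate) ** years_ahead

# A near-term payoff (year 2) versus a distant one (year 20),
# first at a natural 8% rate, then at an artificially lowered 2% rate:
near_hi, far_hi = present_value(100, 2, 0.08), present_value(100, 20, 0.08)
near_lo, far_lo = present_value(100, 2, 0.02), present_value(100, 20, 0.02)
# Cutting the rate raises every payoff's value, but it raises the distant
# payoff's value far more relative to the near one, tilting investment
# toward long-lived projects such as housing.
```

Because the discount factor compounds over time, a rate cut works disproportionately in favor of the most distant cash flows, which is why the boom concentrates in long-maturity sectors.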

The increase in employment and income in those sectors causes an increase in the demand for current consumption goods. This bids up prices of labor and raw materials, provided either that full employment has been reached or that those resources are specialized to their particular sectors. This tends to reduce investment in shorter-term production processes, including those that produce goods and services for current consumption. Moreover, the original investments are starting to run into trouble for three reasons: first, because their costs are unexpectedly increasing; second, because the consumer demand that would ordinarily have accompanied an increase in saving is absent because it was monetary expansion, not saving, that produced the fall in interest rates; and third, because interest rates return to their (higher) natural level, making it difficult to complete or support the original investments.

Only an increase in the rate of monetary expansion will allow original investments to be refinanced or validated by an artificial shot of consumer demand. That is what often happened throughout the 20th century – central banks frantically doubled down on their original monetary policy when its results started to go sour. Of course, this merely repeated the whole process over again and increased the size and number of failed investments. The eventual outcome was widespread unemployment and recession. That is the story of the recent housing bubble. This mushrooming disaster couldn’t happen without central banking, which explains why 19th century business cycles were less severe than many modern ones.

I don’t recall reading this rather complicated explanation before or hearing it discussed on television or radio. Why not?

The preceding theory of business cycles was developed by F. A. Hayek in the late 1920s, based on monetary theory developed by his mentor, Ludwig von Mises, and the interest-rate theory of the Swedish economist, Knut Wicksell. Hayek used it to predict the onset of what became the Great Depression in 1929. (Von Mises was even more emphatic, foreseeing a “great crash” in a letter to his wife and refusing a prestigious appointment in his native Vienna to avoid being tarred by exposure to events.) Hayek’s theory earned him an appointment to the London School of Economics in 1931. It was cited by the Nobel committee that awarded him the prize for economic science in 1974.

But after 1931, Hayek engaged several theoretical controversies with his fellow economists. The most famous of these was his long-running debate with John Maynard Keynes. One long-term consequence of that debate was the economics profession’s exile of capital theory from macroeconomics. They refused to contemplate the distinction between long-term and short-term production processes and capital goods. They treated capital as a homogeneous lump or mass rather than a delicate fabric of heterogeneous goods valued by an intricate structure of interest rates.

That is why Keynesian macroeconomics textbooks pretend that government can increase investment by creating money that lowers “the” interest rate. If government could really do this, of course, our lives would all be radically different from what they actually are. We would not experience recessions and depressions.

Public-service radio and television advertisements warn consumers to beware of investment scams that promise returns that are “too good to be true.” “If it sounds too good to be true,” the ad declares sententiously, “it probably is.” What we really need is a commercial warning us to apply this principle to the claims of government and government policymakers – and, for that matter, university professors who are dependent upon government for their livelihood.

It turns out to be surprisingly difficult to refute the claims of Keynesian economics without resorting to the annoyingly complicated precepts of capital theory. Ten years before Keynes published his theory, the American economists Foster and Catchings developed a theory of government intervention that embodied most of Keynes’ ideas. They published their ideas in two books and challenged the world to refute them, even offering a sizable cash prize to any successful challenger. Many prominent economists tried and failed to win the prize. What is more, as Hayek himself acknowledged, their failure was deserved, for their analysis did not reveal the fallacies inherent in the authors’ work.

Hayek wrote a lengthy refutation that was later published under the title of “The ‘Paradox’ of Saving.” Today, over 80 years later, it remains probably the most meticulous explanation of why government cannot artificially create and preserve prosperity merely by manipulating monetary variables like the quantity of money and interest rates.

There is nothing wrong with Hayek’s analysis. The main problem with his work is that it is not fashionable. The public has been lied to so long and so convincingly that it can hardly grasp the truth. The idea that government can and should create wealth out of thin air is so alluring and so reassuring – and the idea of its impossibility so painful and troubling – that fantasy seems preferable to reality. Besides, large numbers of people now make their living by pretending that government can do the impossible. Nothing short of social collapse will force them to believe otherwise.

The economics profession obsessively studied and researched Keynesian economics for over 40 years, so it has less excuse for its behavior nowadays. Keynes’ main contentions were refuted. Keynesianism was rejected by macroeconomists throughout the world. Even the head of the British Labour Party, James Callaghan, bitterly denounced it in a famous speech in 1976. The Labour Party had used Keynesian economics as its key economic-policy tool during its installation of post-World War II socialism and nationalization in Great Britain, so Callaghan’s words should have driven a stake through Keynes’ heart forevermore.

Yet economists still found excuses to keep his doctrines alive. Instead of embracing Hayek, they developed “New Keynesian Economics” – which has nothing to do with the policies of Bernanke and Obama today. The advent of the financial crisis and the Great Recession brought the “return of the Master” (i.e., Keynes). This was apparently a default response by the economics profession. The Recession was not caused by free markets, nor was it solved by Keynesian economics. Keynesian economics hadn’t got any better or wiser since its demise, so there was no reason for it to reemerge like a zombie in a George Romero movie. Apparently, economists were reacting viscerally in “we can’t just sit here doing nothing” mode – even though that’s exactly what they should have done.

If QE and ZIRP are not the answer to our current economic malaise, what is?

In order to solve a problem, you first have to stop making it worse. That means ending the monetary madness embodied in QE and ZIRP. Don’t try to keep interest rates as low as possible; let them find their natural level. This means allowing interest rates to be determined by the savings supplied by the private sector and the investment demand generated by private businesses.

In turn, this means that housing prices will be determined by markets, not by the artificial actions of the Fed. This will undoubtedly reverse recent price increases recorded in some markets. As the example of Japan shows only too well, there is no substitute for free-market prices in housing. Keeping a massive economy in a state of suspended animation for two decades is no substitute for a functioning price system.

The course taken by U.S. economic history in the 20th century shows that there is no living with a central bank. Sooner or later, even a central bank that starts out small and innocuous turns into a raging tiger with taxpayers riding its back and looking for a way to get off. (The Wall Street Journal’s recent editorial “Bernanke Rides the Bull” seems to have misdirected the metaphor, since we are the ones riding the bull.) Instead of a Fed, we need free-market banks incapable of wangling bailouts from the government and a free market for money in which there are no compulsory requirements to accept government money and no barriers to entry by private firms anxious to supply reliable forms of money. Bitcoin is a promising development in this area.

What does all the talk about the Fed “unwinding” its actions refer to?

It refers to undoing previous actions; more specifically, to sales that cancel out previous purchases of U.S. Treasury bonds. The Fed has been buying government bonds in both primary and secondary bond markets pursuant to its QE and ZIRP policies, respectively. It now has massive quantities of those bonds on its balance sheet. Technically, that makes the Fed the world’s largest creditor of the U.S. government. (Since the Fed is owned by its member banks, the banks are really the owner/creditors.) That means that the Federal Reserve has monetized vast quantities of U.S. government debt.

There are two courses open to the Fed. One of them is hyperinflation, which is what will happen when the Fed stops buying, interest rates rise to normal levels and banks have no alternative but to use their reserves for normal, profit-oriented purposes that put money into circulation for spending. This has never before happened in peacetime in the U.S. The other is for the Fed to sell the bonds to the public, which will consist mostly of commercial banks. This will withdraw the money from circulation and end the threat of hyperinflation (assuming the Fed sterilizes it). But it will also drive bond prices into the ground, which means that interest rates will shoot skyward. This will create the aforementioned government budget/debt crisis of industrial strength – and the high interest rates won’t do much for the general business climate for awhile, either.

Since it is considered a public-relations sin for government to do anything that makes the general public uncomfortable and which can be directly traced to it, it is easy to see why the Fed doesn’t want to take any action at all. But doing nothing is not an option, either. Eventually, one of the two aforementioned scenarios will unfold, anyway, in spite of efforts to forestall them.

Uhhhh… That doesn’t sound good.

No spit, Spurlock. Yet, paradoxical as it might seem at first, either of these two scenarios will probably make people more receptive to solutions like free banking and free-market money – solutions that most people consider much too radical right now. There are times in life when things have to get worse before they can get better. Regrettably, this looks like one of those times.

DRI-293 for week of 2-17-13: The Man Who Created Today’s Telecommunications Marketplace

An Access Advertising EconBrief:

The Man Who Created Today’s Telecommunications Marketplace

Today we live in a world enveloped by telecommunications. iPhones and smartphones provide not only voice communications but data and Internet transmission as well. Cell phones are ubiquitous. Television stations number in the hundreds; their signals are received by consumers in direct broadcast, cable and satellite transmission form. Both radio and TV broadcasts can be streamed over the Internet. The Internet itself is accessible not only using a desktop computer but also via laptops, Wi-Fi and mobile devices.

For anyone below the age of forty, it strains the imagination to envision a world without this all-encompassing marketplace. Yet older inhabitants of the planet can recall a starkly primitive telecommunications habitat. In the United States – the most technologically advanced nation on Earth – there was one telephone company for almost all residents in 1970. There was one satellite transmission provider. In the wildly competitive corner of telecommunications – broadcast television – there were three fiercely competing networks.

How did we get from there to here in forty short years? And can we entertain an alternate scenario in which we might not have made the journey at all? The answers to these questions are chilling, for they open up the possibility that were it not for the efforts of one man, the great revolution in telecommunications might not have happened.

The man who created the telecommunications marketplace of today was Clay “Tom” Whitehead. The unfamiliarity of that name is an index of why we should study the unfolding of competition in the market for telecommunications. Before we introduce the leading character in that drama, we first set the scene by describing the terrain of the market in 1970 – and what shaped it.

The Economic Doctrine of Natural Monopoly

In 1970, American Telephone and Telegraph – the corporate descendant of the Bell Telephone Company founded by Alexander Graham Bell – was the monopoly telephone service provider for virtually all of America. The rationale for this arrangement was provided by the doctrine of natural monopoly.

A natural monopoly was said to exist when a single firm was the most efficient supplier for the entire market. This was caused by the unique cost structure of that market, in which the average cost of production decreased as output increased. It is vital to visualize this as a static condition, not a dynamic one; it is not dependent on a succession of technological innovations of the sort for which Bell’s scientists were renowned. If Bell Labs had never developed a single invention, in other words, the company’s status as a natural monopoly would not have changed.

If decreasing average cost was not due to innovation, what did cause it? The most plausible explanation came from engineering. The 2/3 Rule related the cost of transmission through a pipe or transportation via a container to its capacity. Cost varied with surface area, which grows as the square of the vessel’s linear dimensions, while throughput varied with volume, which grows as the cube. Total cost therefore rose only as the two-thirds power of output, so average cost – the ratio of total cost to total output – continually fell as output increased.
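The arithmetic of the 2/3 Rule is easy to sketch. The cost constant below is an arbitrary placeholder; only the scaling relationship matters:

```python
def total_cost(output, k=1.0):
    """Under the 2/3 Rule, total cost scales with output to the 2/3 power."""
    return k * output ** (2 / 3)

def average_cost(output, k=1.0):
    """Equals k * output ** (-1/3): it falls without limit as output grows."""
    return total_cost(output, k) / output

# With k = 1: average_cost(1) == 1.0, average_cost(8) ~ 0.5,
# average_cost(1000) ~ 0.1 -- unit cost keeps declining as scale grows.
```

Since average cost never stops falling, the largest producer can always undercut smaller rivals while covering its costs, which is the formal core of the natural-monopoly argument described above.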

Continually falling average cost meant that one firm could constantly lower its price while producing ever more output, while still covering all its costs. This would enable it to underprice and force out any and all competitors. Since monopoly was the eventual fate of the industry anyway, the thinking went, better to accept it by designating a single legal monopolist while striving to mitigate the monopoly outcome.

In America, the mitigation was accomplished by profit regulation. The natural monopoly firm was allowed to earn a “normal” rate of return, sufficient to attract capital to the industry, but no higher. That normal rate of profit was identified by the public utility commission (PUC) based on hearings at which the company, regulators and various interest groups (notably those supposedly representing consumers) testified.

When outlined in textbooks and classrooms, this concept sounded surprisingly reasonable. When put into practice, though, it was a mess.

Perhaps the worst feature of PUC regulation of so-called natural monopoly was the increasing chumminess between commissions and the monopoly firms they oversaw. This sounds like an accusation of collusion, but in reality it was the inevitable by-product of the system. Commissions lacked the technical expertise to regulate a high-tech business. While they possessed both the right and the ability to hire consultants to advise them, the trouble and expense relegated this to rate-case hearings at which the profits and rates charged by the company were reviewed. On a day-to-day basis, the commission was forced to cooperate with and rely on the company’s employees to guarantee that the utility’s customers were served.

After all, the firm was a genuine, honest-to-goodness monopoly – not a phony, pseudo-monopoly like the oil companies, which faced scads of competition and any one of whose customers had lots of competitive alternatives to turn to. The oil companies were monopolies only for purposes of political theater, when politicians needed a scapegoat for their foolish energy policies. But if a public utility were threatened with insolvency or operational failure, then the lights might go out or the phones go dead for an entire city, metro area or region. So the PUC was regulating the utility and protecting it at the same time.

Regulation was probably an impossible task anyway, but this ambiguity made things hopeless. The result was that PUCs erred on the side of excessive rates of return and compliance with company wishes. Since high profits were out of the question anyway, public-utility executives took their “excess profits” in the form of perquisites and a quiet life, free from the stresses and strains of ordinary business. Public utilities became noted for lavish facilities, huge administrative budgets and large staffs – in the vernacular of the industry, this was called “gold-plating the rate base.” (The rate base was the agreed-upon list of expenses and investment the company was allowed to recover in rates charged to customers and upon which its rate of return was earned.)
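The arithmetic behind “gold-plating the rate base” can be made concrete. The following minimal sketch (the `revenue_requirement` helper and all dollar figures are hypothetical illustrations, not drawn from the article) shows why padding the rate base paid under rate-of-return regulation:

```python
# Toy model of rate-of-return regulation. The PUC lets the utility collect
# roughly: operating expenses + allowed_return * rate_base. Every dollar
# added to the rate base therefore raises the absolute profit allowed.

def revenue_requirement(expenses, rate_base, allowed_return=0.10):
    """Total revenue the PUC permits the utility to recover from ratepayers."""
    return expenses + allowed_return * rate_base

lean = revenue_requirement(expenses=50e6, rate_base=200e6)    # $70M allowed
plated = revenue_requirement(expenses=50e6, rate_base=300e6)  # $80M allowed

# Padding the rate base by $100M raises allowed profit by $10M a year,
# all of it billed to captive customers.
print(round(plated - lean))  # prints 10000000
```

Since the extra $10 million is pure allowed return, the utility has every incentive to build lavish facilities and carry large staffs rather than minimize cost.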

Ordinary businesses feel constant pressure to hold down costs in order to maximize profit; cost-minimization is what helps ensure that scarce economic resources are used efficiently to produce output. But public utilities were assured of their profit and coddled by regulators; thus, they faced no pressure to reduce costs or innovate. Indeed, the reverse was true – a cost innovation would theoretically call for new rate hearings to reduce the utility’s rates, since otherwise it would exceed its regulatory allowance of profit. Economists were so fed up with the sluggish pace of technological progress among public utilities in general, and the Bell system in particular, that most viewed the phenomenon of “regulatory lag” as a good thing. It was worth it, they reasoned, for the utility’s profits to exceed its limit in the short run as an inducement to effect cost reductions that would achieve long-run efficiency.

It would seem that PUCs would have faced public criticism for failure to hold down public-utility profits, since that was their primary raison d’être. Commissions sought to inoculate themselves from this criticism by a policy of offering artificially low prices to residential customers of public utilities. Since they had to raise enough total revenue to meet all utility costs plus an allowance for a fat profit, this subsidy to residential customers had to be recouped somewhere. In practice, it was regained by socking business users with onerous rates. The Bell phone companies, for example, charged notoriously high rates to business users of telephone service.

Commissions trotted out a legal rationale for this policy of price discrimination in favor of residential users and against business users. The policy furthered the goal of universal service, claimed commissioners proudly. Because public-utility products were goods like telephone service, electric power and gas service, commissions could plausibly depict them as necessary to public health and safety. Consequently, they justified subsidies to residential users by maintaining the necessity of assuring service to all, regardless of income, on the basis of need.

Of course, the economic logic behind the policy of universal service was non-existent. High rates levied on businesses were not paid by non-human entities called “businesses.” No business ever paid anything in the true economic sense because payment implies a sacrifice of alternative consumption and the utility or happiness delivered by it. Since a business cannot experience happiness – or lose it – a business cannot pay for anything. Those high business rates for phone service, for example, were paid in the long run by consumers of the business’s output in the form of higher prices and by suppliers of inputs to the business in the form of lower remuneration. But to the extent that the public were deceived by the rhetoric of the commission, they may have approved the wasteful doctrine of universal service. This is ironic, for the Bell system never succeeded in increasing the percentage of household subscriptions to phone service to the level of the percentage of households owning a television set. So much for the absolute necessity of telephone ownership!

Meanwhile, public utilities became public menaces when they spotted businesses threatening their turf. Cellular telephone technology was technically feasible as long ago as 1946 (!), but the Bell companies weren’t interested in developing it because they already had a highly profitable and completely secure fiefdom based on landline technology. And they weren’t about to stand idly by while other businesses moved in on their markets! Consequently, applicants for licenses to operate mobile phone businesses were either denied or hamstrung by red tape.

In 1956, the Justice Department was sufficiently fed up with Bell’s antics to launch an antitrust suit against the Bell system. In a sense, this was inherently contradictory since government had granted the monopolies under which the Bell companies operated. But Justice accurately realized that something had to be done to break up the cozy arrangement between Bell and the state and local politicians whose regulation was in fact serving as the barrier to competition in products ancillary to Bell’s landline phone service. It is one measure of the political influence wielded by the Bell empire that this lawsuit proved abortive and was dropped without result.

Another indicator of Bell’s power was the fact that the Bell companies annually issued more debt than did the federal government itself. When the federal antitrust action was revived in 1974, then-Treasury Secretary George Shultz (formerly a well-known labor economist at the free-market-oriented University of Chicago) reminded prosecutors of this fact and advised that the antitrust suit be quashed for fear of “roiling the bond markets” prior to an upcoming bond issue by the U.S. Treasury. This advisory outraged a relatively obscure White House official at the Office of Telecommunications Policy.

Tom Whitehead and the “Open Skies” Policy

In 1970, Clay T. “Tom” Whitehead was a young (32) graduate engineer whose life had taken a detour when he was introduced to economics. He followed up his Master’s in electrical engineering at MIT with a PhD in economics there, studying under noted scholar, theorist and consultant Paul MacAvoy. When the Nixon administration created the White House Office of Telecommunications Policy, Whitehead’s academic credentials and connection to MacAvoy earned him the post of Director. President Nixon viewed the subject of economics with ill-concealed disdain; his aides envisioned the job as a way of grabbing countervailing policymaking power away from the permanent regulatory bureaucracy that controlled the federal government and was dominated by Democratic appointees. Little did they know what kind of policymaker they were getting.

The moon landing in 1969 had achieved the objective of NASA’s space program, which was left with no immediate goal in sight. The Vietnam War had become a fiscal burden as well as a political one, and there was talk of enlisting the private sector to carry some of the financial freight by sponsoring a communications satellite. Up to that point, the satellite program (COMSAT) had been a de facto joint creature of the federal government and AT&T. NASA produced the satellites, the best-known being Telstar. AT&T owned a plurality of the stock shares and seats on the board of directors.

The chairman of the Federal Communications Commission (FCC), a Republican, drafted a proposal for a fully privatized company. It was to be a joint monopoly shared by NBC, ABC, CBS, RCA, GTE (the largest non-Bell telephone company) and AT&T. The presumption was that satellite communications was a natural monopoly like all other forms of communications – television and radio networks, telephone and telegraph. There was no point in promoting a competitive process that was bound to culminate in a monopoly.

Tom Whitehead begged to differ. He put forward a radically different proposal called the “Open Skies” policy. There was plenty of room in space for many satellites owned by many different private companies, each serving its own interests and customers. There was plenty of bandwidth available for satellites to utilize in receiving signals and transmitting them back to Earth. All that was necessary was to adjust orbits and frequencies to preclude collisions and confusion – something that all parties had an interest in doing.

Practically everybody thought Whitehead was crazy. The ones who didn’t doubt him feared him because he threatened their economic or political predominance. But he had the backing of the White House, not for ideological reasons but because he opposed the Establishment, which hated Richard Nixon. And he won his point.

One by one, private firms began sending up communications satellites into space. First came Western Union in 1974. Then came RCA in 1975, followed by Hughes and GTE. The first half-dozen were the pioneers. Eventually, the trickle became a deluge. And the modern age of telecommunications was born.

Privatization of satellite communications also stimulated competition in, and with, cable television. Cable TV had previously been strictly a local phenomenon, tied to AT&T by the need to lease coaxial cable facilities and rights of way. Whitehead backed the FCC’s 1972 policy loosening federal regulations on cable. In 1974, he chaired a committee whose report advocated federal deregulation of cable. This freed the industry to lease and own satellites and take its product national. Satellite communications allowed competing cable providers to uplink popular local and regional stations’ programming for national distribution. Later, satellite TV emerged as a leading competitor to cable TV, providing more channels, better reception and fewer problems.

More recently, satellite radio and TV have developed their own competitive niches. Satellites have become the transmission media of choice for telecommunications, occupying a vantage point from which signals can be sent anywhere on the planet. This revolution was the brainchild of Tom Whitehead.

Tom Whitehead and the Breakup of AT&T

Tom Whitehead did not initiate the antitrust suit against AT&T, nor was he directly involved in prosecuting it. But he was a powerful influence behind it nonetheless.

His staff at OTP had independently reached the conclusion that the political power and economic inertia of the Bell system formed an insuperable obstacle to competition in telecommunications. When he urged them to approach the Department of Justice about reactivating its 1956 suit against Bell, they learned that DOJ was moving in that direction already.

Had the White House opposed this initiative, it would have stalled out like its predecessor. The Department of Defense claimed that the lawsuit was a threat to national security because the Bell system was a vital cog in the national defense. (Among other things, AT&T worked closely with DOD, the Pentagon and the FBI on civil defense, counter-espionage and domestic military exercises.) As noted above, AT&T even wielded financial clout in government circles because its capital-intensive production methods made it even more heavily reliant on debt finance than the federal government itself.

But Whitehead was adamantly in favor of the action. The American public complained about the absurdity of fixing a phone system that wasn’t broke and compared the suit to a parallel action against IBM. In fact, the two had nothing in common, since IBM wasn’t a monopoly while AT&T was a monopoly in the old-time, classical sense – it was not only a single seller of a good with no close substitutes, but entry into its market was legally barred by the government itself.

The regional Bell companies resisted the breakup tenaciously and still to this day continue to fight harder against competition than they do commercially against their competitive rivals. After all, they were created as creatures of regulation, not competition, and don’t really know how to behave in a competitive market.

The result speaks for itself. Today, Americans have decisively rejected landline telephone service and embraced the new world of wireless and digitized telecommunications. They can obtain phone service via cell phones or more sophisticated mobile devices that perform multiple functions. They can combine phone service with data processing functions over the Internet. The last vestiges of the old monopoly remain standing alongside the dying Post Office in the form of mandatory service provided to remote and rural areas. Today, even the staunchest defenders of regulation and the old status quo cannot deny that Whitehead was the visionary and that they were the reactionaries.

Whitehead’s Subsequent Career

After leaving OTP in 1974, Tom Whitehead went first to a subdivision of Hughes Communications, where he started a private cable division. He thus became instrumental in the later development of satellite TV. Then he fomented his next revolution by moving to Luxembourg (!), where he started SES Astra, a satellite company that pioneered private television broadcasting in Europe. Before Whitehead, Europe had no private television broadcasters; they were all state-owned.

Luxembourg was chosen because its minuscule size allowed Whitehead and company to chainsaw their way through its government bureaucracy relatively quickly. The nature of their opposition can be gauged by the fact that they faced their first lawsuit within 20 minutes of receiving their incorporation papers. Today, the company Whitehead founded is the world’s second-largest satellite provider, riding herd on more than 50 satellites that serve over 120 million customers.

After retiring, Whitehead taught at George Mason University, where he hosted the world’s leading figures in telecommunications at his seminar. He died in 2008. This year, the Library of Congress received his papers. The American Enterprise Institute commemorated the occasion by organizing a symposium of his friends and co-workers to highlight his role in shaping the world we inhabit.

The Economic Significance of Tom Whitehead

Tom Whitehead’s life starkly defines the importance of individuals to history and human welfare. Only a tiny handful of other human beings on the planet might have occupied his position and achieved the outcomes he did. And without those outcomes, the world would be a vastly different – and far worse – place.

Tom Whitehead was fought tooth and claw by the forces of government regulation. (The historical chain of coincidence that lined up DOJ against AT&T will be the subject of a future EconBrief.) This illustrates the fact that government regulation of business is not a useful supplement to marketplace competition, but rather an inferior substitute for it. The purported aims of regulators are in fact precisely the outcomes toward which competitive markets gravitate. If regulators knew better than businesspeople and consumers how to produce, sell and select appropriate numbers and kinds of goods and services, they would work in the private sector rather than in government. Their position in government leaves them poorly placed to run companies or industries, or to impose their will on consumers. In this case, if regulators had their way, we would still occupy the telecommunications equivalent of the Stone Age.

Whitehead’s life illustrated the difference between technological progress and economic progress. Communications satellites became technically possible in the late 1950s; cell phones in the mid-1940s; cable TV in the 1930s. But these did not become economically feasible until the 1970s. And economic feasibility, not technical or engineering feasibility, determines value to humanity.

Economic feasibility requires demand – a use must be found that delivers value to consumers. It requires supply – the technically-feasible product or process must be produced and sold at a sacrifice of alternative output that consumers can accept. Last, but not to be overlooked, the technically feasible product or process must be politically tolerated. Incredible as it might seem, this last hurdle is often the highest.

Tom Whitehead played a direct role in meeting two of these requirements for telecommunications and indirectly allowed the third to be met. He created the telecommunications market we enjoy today as surely as did Edison, Tesla and the technological pioneers of the past.

His name should not languish in obscurity.

DRI-309 for week of 10-21-12: The Economic Logic of Gifts

An Access Advertising EconBrief:

The Economic Logic of Gifts

The approach of the year-end holidays releases a flood of gift-oriented online content. One such article appeared on MSN on October 19. “What Women Want Men Don’t Give,” by Emily Jane Fox, seized on the publication of research by American Express and the Harrison Group as an opportunity for male-bashing. The full findings, though, don’t provide ammunition to either side in the battle of the sexes. But they do supply grist to the mill of economists.

Economists study the practice of gift-giving carefully. This surprises most people, who view gifts and pecuniary purchases as antithetical behavior. Yet in both theory and practice, gifts loom large in economics.

Most laymen are aware that the bulk of retail sales are expended on holiday gifts during the Christmas season. But they are probably unaware that economists are even more interested in the individual motivations for giving than in their seasonal macroeconomic impact. Senator Jeff Sessions recently identified nearly 80 separate federal welfare programs that dispense around $1 trillion annually. Whether cash or in-kind, these disbursements are gifts in the technical sense. War reparations demanded by victor nations from the vanquished have sometimes changed the course of history, the most famous example being the steep debts levied on Weimar Germany by the Allies after World War I. Such reparations are yet another form of gift, this time on the national level. Their effects are proverbial among students of international economics.

The MSN article is best understood as an exercise in the modern practice of rhetorical journalism. That is, it is intended not to report facts but to produce an effect on the reader. Economists study unintended consequences of human action, and in this case the author’s attempt to manipulate her readers has produced surprising revelations about the economic logic of gift-giving.

The Process Frustrates Everybody – Including the Author

The research, sponsored by American Express and a consulting firm called the Harrison Group, surveyed 625 households whose working members ranked in the top 10% of wage-earners by income. The survey questions probed respondents’ preferences in both gift-giving and receiving. The results found “…a wide gap between what people want and what they actually get.” Apparently, the gap occurs because people do not give gifts in accordance with recipients’ wishes. Indeed, they do not even behave the way they themselves want their own gift-givers to act.

The author showcases women as victims of this asymmetry. “Two-thirds of affluent American women want gift cards. But less than a fifth of men will [comply]…Instead, 70% of American women are gifted clothing or jewelry.”

Thus, the article begins in a familiar manner. The reader is presented with stereotypes – women as practical consumers, men as selfish dinosaurs who persist in their ways heedless of feminine sensitivities. But just when the reader feels able to predict what is coming, the article changes course.

Men, too, suffer the pain of asking without receiving. Men “…want food and alcohol – a third are hoping for gourmet foods and fine wine, and another third want gift cards. But women like to give none of these – 30% are expected to give clothing and another 15% books… What wealthy shoppers are better suited for [is] giving gifts to themselves.”

The author’s frustration seems comparable to that of her subjects. Having begun with one agenda in mind – to reinforce the stereotype of male insensitivity – she runs up against the comparable worthlessness of female behavior. When her gender angle encounters a roadblock, she makes a last-minute detour in the direction of class envy by indicting the self-absorption of “wealthy shoppers.” But she lacks the space to start up another argument and must rest content with allowing the headline to do all her work.

What Do Women Want, Generically Speaking? Does This Differ From Male Wants?

Economists try to interpret facts in light of what they know – or think they know – about human motivation. Can we take the scenario as presented above and make sense of it?

“What do women want?” is an age-old question. Economics does not recognize separate male and female systems of logic, so the author’s apparent frustration need not cloud the analysis. The fact that men and women behave broadly alike is not shocking. The question is: Is their behavior logically consistent?

Some commenters on the online article showed disdain for the author’s willingness to question the value of a gift. Why not accept it in the (presumably charitable) spirit in which it was offered? That is a question worth tackling.

The exchange of gifts is a ritual dating back many centuries. The distinctive features of holiday gift exchange in the Western world are its reciprocal and quasi-compulsory character. Reciprocity implies that, roughly speaking, the net monetary value of the exchanges can be treated as cancelling out. Consequently, their only real value must be to achieve some sort of efficiency. Otherwise, why bother? When it comes to random gifts, the commenters have a point. The mouth of a non-reciprocal, fully voluntary gift horse is certainly not worth close examination.

There is much to criticize about the holiday gift-giving ritual, though. The custom of giving gifts in-kind runs afoul of the long-recognized economic presumption in favor of gifts in cash. Students of intermediate microeconomics courses are routinely shown the inefficiency of programs like the federal food-stamp program, which subsidizes the consumption of (somewhat) poor people by giving them subsidized food rather than a cash payment of equal value. (Technically, the term “equal value” must refer to the value of the subsidized good – food – that the recipient chooses, which can only be determined after the consumption choice is made on pre-selected terms. But it is easy to show diagrammatically or mathematically that giving the recipient an amount of cash equal to the value of the food they choose could never make them worse off and would probably make them better off.)

The inherent logic behind the demonstration is quite straightforward. The gift confers an increment of real income upon the recipient. An addition to real income creates a willingness to consume a larger amount and/or higher frequency of all normal goods, not merely more of one specific good. Hence, the new optimal basket of consumption goods will include increases in more than just one good or service. Receipt of real income in the form of cash allows maximum scope for distributing the increase among the different possible choices.
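The textbook demonstration can be sketched numerically. In this minimal example (the Cobb-Douglas utility function and all numbers are illustrative assumptions, not taken from the article), a recipient with fixed income compares a cash gift to an equal-value voucher that can only buy one good:

```python
# Cash vs. in-kind gifts: with utility U = food^0.2 * other^0.8, the
# consumer ideally spends 20% of total resources on food. A voucher that
# can ONLY buy food forces food spending to at least the voucher's value.

def best_utility(income, voucher=0.0, steps=10001):
    """Grid-search the utility-maximizing split of spending between
    food and other goods, subject to food spending >= voucher."""
    total = income + voucher
    best = 0.0
    for i in range(steps):
        f = voucher + (total - voucher) * i / (steps - 1)  # food spending
        o = total - f                                      # other goods
        u = (f ** 0.2) * (o ** 0.8) if f > 0 and o > 0 else 0.0
        best = max(best, u)
    return best

income, gift = 100.0, 40.0
u_cash = best_utility(income + gift)          # gift arrives as cash
u_kind = best_utility(income, voucher=gift)   # gift arrives as food vouchers

# Ideal food spending is 0.2 * 140 = 28, less than the $40 voucher, so the
# constraint binds and the cash recipient ends up strictly happier.
print(u_cash > u_kind)  # prints True
```

If the voucher were smaller than the recipient’s desired food spending, the two schemes would tie; cash never does worse, which is the presumption the text describes.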

Why doesn’t this same logic apply to gifts? The short answer is that it does. Economists have devised various ad hoc explanations to rationalize the practice of in-kind gift giving, but none of them really satisfies. That is why these research results are so unsurprising. Women prefer receipt of gift cards to clothing and jewelry. If the gift cards are issued by specialty stores, they allow the recipient a wider range of choice among the styles, brands and sizes of clothing and jewelry. If the cards are to department stores, they allow even wider branching out to other types of goods and services. There is even a legal market for the exchange of gift cards for cash (at a discount), just as food-stamp recipients once traded stamps for cash illegally at discounts up to 50%.

Gift cards are also among the preferred options of men, but men display more willingness to delegate the shopping for their preferred choices of gourmet foods and liquor. This is probably owing to the traditional division of labor, in which women shop for and prepare food but men often purchase liquor. This is the rare case in which people willingly let somebody else make their consumption choices for them: the giver is an expert and the recipient is not (or may not be).

In the light of this, the oft-expressed nostalgia for the Christmas of childhood is understandable. Children are typically net beneficiaries of the gift-giving ritual, with their gift exports being outweighed in number and value by their imports. This favorable holiday balance of payments casts a rosy glow over the holidays that gradually dims in intensity as increasing export responsibilities accompany the aging process.

Note that there is a role for gender in these research results, all right – just not the invidious one implied by the article’s headline. Another way to consider this matter is to ask whether women’s responses would differ markedly if the gift-giver were another woman, as opposed to a man. (Later, we will consider the significance of the degree of intimacy between giver and recipient.) Assuming the answer is no – and the article made no reference to any such distinction in this research – then the economic logic above is sound.

Modifying for the division of labor, the research results show both men and women displaying the basic utility-maximizing, economic preference for cash or cash substitutes rather than narrow in-kind gifts. What are we to make of the apparent fact that both sexes appear “better suited for giving gifts to themselves?”

Utility Maximization and Selfishness

The article’s author is apparently affronted by the possibility that some of us are better suited for giving gifts to ourselves than to others and she wants us to feel her outrage. She probably likes her chances because her target – the top 10% of wage earners – is a loose proxy for “the wealthy,” who are under assault from many sides these days. The overriding sin committed by the wealthy is alleged to be “greed” or “selfishness.” This has often been likened to the behavioral assumption underlying the economic theory of consumer demand, which is utility maximization. We assume that people try to become as happy as possible.

The equation of utility maximization with selfishness simply won’t wash. For one thing, utility maximization doesn’t say anything one way or the other about other people because the individual’s utility function is assumed to be independent of the consumption of other people. “Independent” means just that. It doesn’t mean that we set out to hurt other people or to studiously ignore them. It just means that our overriding goal is our own happiness.

And in fact it could hardly be any other way. The reality of our internal and external worlds dictates it.

Each of us instinctively recognizes the difficulty of ever really knowing another person as we know ourselves. The closest most of us ever come is through the institution of marriage, yet nearly half of U.S. marriages end in divorce. The same basic conflicts that drive couples apart also militate against optimal gift-giving – budgetary disagreements, differences in tastes and preferences, in maturity and temperament, in perception and grasp of reality.

Looking out for number one has been our evolutionary priority since day one. But ever since man began congregating in groups, an ethic of sacrificing individual wants to the needs of the group was promulgated. This ethic had survival value for the group, although it tended to be hard on particular individuals. And, over time, group leaders became adept at suspending the rules in their own case.

Meanwhile, mankind slowly developed a market process for increasing wealth and happiness. This market process ran counter to the group ethic because it increasingly demanded cooperation with individuals outside the group – indeed, cooperation between individuals who never met or even suspected that they were cooperating. This extended order of cooperation was one of the market’s greatest strengths, since it prevented political, religious or cultural differences from interfering with the growth of wealth and real income.

When the social order consisted of mated pairs living in caves, it was not unreasonable for one mate to control the consumption pattern of the pair. The choices were so few and so starkly simple, the human species so primitive that one could envision coming to anticipate the wants and desires of a spouse to a high degree. Today’s sophisticated world with tens of thousands of consumption choices made by evolved human brains makes nonsense of that concept. Even spouses cannot be expected to read each other’s minds well enough to reach the apex of consumption choice.

In this context, it is worthwhile to observe that women are apparently the ones who pretend to this level of expertise. They refuse to delegate their clothing and jewelry purchases but are more than willing to overrule men’s consumption choices, to the point of substituting clothes for food and liquor in their “gifting.” (Why not “giving,” by the way?) We accuse government bureaucrats of paternalism, but it would appear that this should instead be maternalism.

It is luminously clear that there can be only one true expert on your consumption, and that is you. Nobody else in the world could begin to accumulate the objective information on the thousands of potential goods that you can or might consume, or the subjective data on your particular tastes, preferences and attitudes towards them. It is sobering to realize that not even a life mate can approach the degree of familiarity needed to truly run your life for you.

Small wonder, then, that “nearly half of women” in the survey “said they were extremely, or very likely to buy themselves presents this holiday season. A third of men have similar intentions.”

It is idiotic to call this behavior selfishness when it merely acknowledges the practical facts of life. We need a vocabulary to describe an inordinate preoccupation with self – the kind displayed by thieves, murderers, embezzlers and the like – and this is the proper preserve of words like “greedy” and “selfish.”

The Theory of Gifts and Public Policy

The economic logic of gifts has important implications for public policy. For over four decades, various researchers have estimated the amount of welfare expenditures necessary to lift every man, woman and child above the so-called poverty line. Then they have compared this irreducible necessary minimum expenditure on fighting poverty with the amount actually spent by federal welfare programs. The ratio between what we spend and what we would theoretically need to spend has fluctuated over time. It has been as low as two and as high as ten. Currently, according to the latest estimate, it is about five.

We are ostensibly trying to eliminate poverty. We are currently spending five times more on indirect ways of doing this than we would need to spend if we simply gave cash directly to poor recipients. And we are failing to achieve the stated objective of eliminating poverty, since even if the value of cash and in-kind subsidies is added to income there are still a substantial number of people living below the poverty line. We know that cash subsidies are more effective at increasing the happiness of recipients than the in-kind subsidies, such as food stamps, in which the federal government specializes.

So why are we still pursuing a horribly wasteful and inefficient policy of fighting poverty and failing instead of implementing a simpler, much cheaper and more efficient policy that will succeed?

Put this way, the answer stands out. It is reinforced by the experience of any classroom teacher who ever explored the issue. In droves, students insist that we cannot afford to give cash to welfare recipients because they will spend the money in unsuitable ways; e.g., ways that the students do not approve of. Expenditure on illicit, mind-altering drugs is the example most often chosen to illustrate the point.

Students persist in this view even after the irrefutable demonstration that current in-kind forms of welfare, such as food stamps in both their former and present incarnations, also allow recipients to increase their expenditure on “other goods” besides the subsidized good. (In-kind subsidies hinder the flexibility of recipients but allow them to buy the same amount of the subsidized good as before with less money, thereby freeing up more regular income for use in buying drugs or other contraband.)

Thus, it is clear that the actual rationale behind the government welfare system is not to improve the welfare of recipients by maximizing their utility. Instead, it is to maximize the utility of taxpayers by allowing them to control the lives of recipients while assuaging their own guilt. Taxpayers are responding to the vestigial evolutionary call of the group ethic that demands individual sacrifice for group preservation, while meeting their own need for utility maximization. They are countenancing interference in the lives of the poor that they would never sit still for in their own lives, and which they resist even in areas as relatively trivial as holiday gift-giving.

Economists look for ways to make everybody better off without making anybody worse off. Eliminating the federal welfare system would end an enormously wasteful and unproductive practice. Research shows that private charity is highly active and more efficient than federal efforts even though substantial taxpayer income is now diverted into federal anti-poverty efforts. If federal programs were ended, more funds would become available for private charitable purposes. Recipients could choose the degree of maternalism they found tolerable and donors could demand or reject maternalism, as they saw fit.

Meanwhile, resources would be freed up at the federal level to produce other things. Dislocations among employees due to agency closures would be no different than layoffs in the private sector due to shifts in consumer demand between different goods and services. Obviously, some federal employees would migrate to private-sector charities, where employment would rise.

Another extension of these principles applies to recent attempts by federal bureaucrats to fine-tune the pattern of consumption by banning or requiring the consumption of particular foods, minerals, vitamins, fats or other substances. In principle, a case might be made for provision of information allowing informed choice by consumers. The problem is that even here, the federal government’s past efforts have worsened the very problems it now purports to solve. But there is no case in favor of allowing government to dictate consumption choices made by citizens because government cannot possibly possess the comprehensive information necessary to verify whether its actions will improve or worsen the welfare of its subjects.

The Economics of Gifts

The research results reported in the MSN article may have frustrated its author, but they are consistent with the economic principle of utility maximization – properly understood. People request the kind and general form of gifts that tend to maximize their utility, but they take exactly the same tack when it comes to giving gifts to others – they tend to maximize their own utility, not that of the recipient. We can fume, fuss, moralize and complain about this behavior, but it is the only practical way to behave. Practical human limitations dictate it.

And when it comes to public policy, it is utterly futile to expect altruism and omniscience to suddenly triumph in an arena where they are even less potent than they are in private life. Private charity has its limitations, but it is best situated to cope with the inherent difficulties involved when one human being tries to help another.

DRI-345 for week of 9-9-12: Other People’s Money

An Access Advertising EconBrief:

 

Other People’s Money

The elephant in the room in any political discussion is the ongoing debt crisis in Europe and the impending one in the U.S. Turn over the debt coin to reveal spending; the two go together like dissipation and death.

We strive to understand the complex and unfamiliar by likening it to the familiar. It is commonplace to read explications of government spending and debt that treat the government as one great big corporation or, worse, as the head of our national household. In fact, it is just those differences between the behavior of government and our own daily lives that give rise to misunderstanding.

In their immortal bestselling text Free to Choose (companion piece to a hit 1980 PBS series), Milton and Rose Friedman developed a beautifully concise matrix to illustrate the differences between government and private spending. Herein lies the key to the avalanche of debt poised to engulf the world.

The Spending Matrix

In a modern economy, money serves as a lubricant to the exchange of goods and services between individuals and businesses. The income received by households for supplying input services to businesses and government forms the basis for expenditure on consumption goods and services. Income not consumed is saved and invested; an excess of current consumption spending over income constitutes dissaving and is financed by borrowing to incur debt. Government “income” is derived from tax revenue and expenditure is undertaken to provide consumption and investment benefits to citizens. Once again, any excess of expenditure over income must be financed, either by money creation or borrowing to incur debt.

The Friedmans explain why the efficiency of the expenditure process depends crucially on the origin of the money being spent and the identity of the spending beneficiaries. First, they identify the four basic categories of spending. Money is the vehicle for spending. Either the money is yours (originating via income you earned) or it is supplied by somebody else via the intermediation of government, which levies taxes and gives you the proceeds. Either you are spending the money on yourself or you are spending it on somebody else. The possibilities reduce to four spending categories.

Category I denotes the case in which you are spending your own money on yourself. In this situation, spending is at its most efficient. The word “efficient” has two everyday meanings, both of which are germane here. First, you spend your own money efficiently because you have the strongest possible incentive not to spend any more than necessary for a given quantity and quality of good or service. This is so because there is nobody whose welfare means more to you than yours. (For our purposes, we can stipulate the content of “you” to include members of your household.) Second, you want to get the most value for your expenditure – that is, for a given expenditure you want to get the best quality and most appropriate items. Again, this makes sense because nobody means more to you than you.

In Category II spending, you are spending your money on somebody else. Your spending will be efficient in the first sense – you will still strive to minimize the cost of a given quality of purchases – but not in the second sense. You are not obsessively concerned with value-maximization because you yourself are not consuming the goods you purchase – somebody else is. Any doubt about the truth of this observation will yield to a study of the yearly gift-return statistics during the Christmas season.

In Category III spending, you are spending somebody else’s money on yourself. Now you will strive to get the best possible value for your money, but you will not be rigorously concerned with cost-minimization because you are not spending your own money – you are spending somebody else’s. Your utility or satisfaction depends on the goods and services you consume, so you have every incentive to acquire goods and maximize their value, but your utility is unaffected by the efficiency with which other people’s money is spent. Thus, you have no incentive to waste time worrying about it.

Category IV spending is the kind undertaken by government. Legislators spend somebody else’s money on somebody else. Consequently, they have no incentive to spend efficiently in either sense. They are not spending their own money, and they do not themselves benefit from the expenditures. Thus, legislators neither minimize the cost of purchasing a given quality of goods with somebody else’s money nor maximize the value of the goods they purchase for others to consume.
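The Friedmans' four categories form a simple two-by-two matrix. A minimal sketch of that taxonomy follows; the category labels and efficiency properties come from the text above, while the data structure and function names are illustrative inventions, not anything the Friedmans wrote.

```python
# A sketch of the Friedmans' 2x2 spending matrix as described above.
# Keys: (whose money is spent, whom it is spent on).
# Values: (category, cost-minimizing?, value-maximizing?).
SPENDING_MATRIX = {
    ("own",    "self"):  ("I",   True,  True),
    ("own",    "other"): ("II",  True,  False),
    ("others", "self"):  ("III", False, True),
    ("others", "other"): ("IV",  False, False),  # typical government spending
}

def classify(whose_money, spent_on):
    """Return the spending category and its two efficiency properties."""
    return SPENDING_MATRIX[(whose_money, spent_on)]

print(classify("others", "other"))  # → ('IV', False, False)
```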

Government Spending

During the 20th century, political economy saw a worldwide trend toward increase in the size and activity of government. The duties of government came to include not merely tasks that private business and individuals were unable to perform, such as national defense, but activities that had heretofore been confined to the private sector, such as the provision and regulation of medical care.

Proponents of bigger government hailed this trend while devotees of limited government deplored it. In terms of our model of spending, the substitution of government for the private sector means a change in spending category and in the relative efficiency with which money is spent. Evaluating this change is one of the best ways of deciding whether more and bigger government is good or bad.

Most government spending is Category IV spending. Legislators appropriate large sums of money from the Treasury and spend it for the benefit of large groups of people or the nation at large. Sometimes the money being spent has a traceable relationship to money raised from the public; sometimes it does not. Sometimes the legislators actually contribute to the funds from which the spending is drawn; sometimes they don’t. (For years, Federal employees were exempt from Social Security and had their own retirement plan. Likewise, state employees hired before 1986 do not contribute to Medicare.) But an individual legislator’s percentage share of the spending and benefits is so minute as to be imperceptible; other incentives swamp the cost and value considerations cited above for legislators.

Category IV spending is the least efficient kind of spending in both senses of the word. It is also the kind of spending most conducive to fraud. Fraud is generally thought of as “deceit” or “trickery,” but its legal definition requires that the perpetrator lacked any intention of performing or providing the contracted-for good or service. Intentions are best gauged and fulfilled by their possessor; by definition, one cannot defraud oneself legally, however self-deceptive one’s actions may be psychologically. Thus, Category I spending is proof against fraud. Category II and III spending has at least the safeguard that you are vetting one end of the transaction, although this is not absolute proof against fraud. But Category IV spending is an open invitation to fraud, since nobody has a direct interest in efficient spending on either end of the transaction.

A Case Study in Government Overspending: Medicare

The Medicare program is a classic case of inefficient government spending in general and an invitation to fraud in particular. Medicare’s general inefficiency lies in its Category IV status. The recipients of the Medicare program are (as a first approximation) elderly Americans. But program expenditures are ultimately determined by government, which approves covered procedures and global budgets. Efficient spending requires patients to view doctor visits, tests and medical procedures as expenses, buying them only when their prospective value outweighs their cost. Doctors should aid patients in determining prospective benefits. Instead, the program grossly distorts the true economic costs of medical treatment by understating them. Doctors have no incentive to seek least-cost treatment regimes since they know that patients pay only a relatively small ($140) deductible and 20% of subsequent treatment costs. Patients have little incentive to minimize costs since they pay so little at the margin for additional treatment. This alone is a formula for overspending – which is just what has happened around the world, forcing most countries to ration medical treatment inefficiently by queue and government fiat rather than efficiently through the price system.
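The cost-sharing arithmetic described above can be made concrete. The $140 deductible and 20% coinsurance rate come from the text; the treatment cost below is a made-up example showing how small the patient's marginal share of the true cost becomes.

```python
# Illustration of the Medicare cost-sharing arithmetic described above.
# Deductible and coinsurance rate are from the text; the treatment cost
# is a hypothetical example.
DEDUCTIBLE = 140.00
COINSURANCE = 0.20

def patient_out_of_pocket(total_cost):
    """Patient's share: the deductible plus 20% of costs beyond it."""
    if total_cost <= DEDUCTIBLE:
        return total_cost
    return DEDUCTIBLE + COINSURANCE * (total_cost - DEDUCTIBLE)

cost = 10_000.00                      # hypothetical treatment cost
share = patient_out_of_pocket(cost)   # patient pays roughly 21% of true cost
print(share, share / cost)
```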

Total Medicare expenditures exceed $500 billion annually. Fraud detections of just under $50 billion are probably underestimates, but nobody knows the true extent of Medicare fraud. One of the most prevalent forms is billing fraud, in which providers bill the government for services not performed. In these cases, the government is spending taxpayers’ money for the ostensible benefit of patients but the actual benefit of providers. Since patients do not pay the bill, they have no incentive to detect or object to the fraudulent payments. Sometimes fraudsters will include patients in the scheme, in order to reduce the likelihood of detection. Since patients are not paying the bill, they do not lose from undetected fraud but do gain from kickbacks.

What about the fact that Medicare recipients are also (often still) taxpayers? Since no action taken by Medicare recipients can affect taxes already collected from taxpayers, recipients quite correctly view those taxes as sunk costs. They ignore them and abuse the system just as much as any non-taxpayer. And their behavior is economically rational.

The Limitations of Government

Why are you a more efficient spender of your money than government? The Friedmans accurately pinpointed one key reason: incentives. You have the strongest incentive to achieve both kinds of efficient spending, cost-minimization and value-maximization. But they almost completely overlooked another, equally important reason: information. In order to buy at least cost, you must be able to locate the names and prices of the relevant sellers. In order to maximize value, you must obtain relevant information about quality and potential substitute and complementary goods.

Economists have traditionally taken this ability for granted, which may be why the Friedmans mostly ignored the issue. Even the well-known economic treatments of the subject of information by Nobel laureates George Stigler and Gary Becker have begged key questions by assuming that buyers and sellers would automatically gather information up to the point where it was no longer economically sensible to continue. The missing link in these treatments is that they assume that consumers already know the nature and type of information that needs to be gathered. In other words, they are supposed to already know what they don’t know, and their only problem is how (and to what extent) to find it out. Or, to borrow a form of expression currently popular, Stigler and Becker have assumed that the problem is one of “known unknowns.”

But the ghastly failures of regulation that led to the housing-market collapse, financial crisis and Great Recession show that the problem of “unknown unknowns” is at least as big. Regulators didn’t know various things – that sovereign debt and mortgage securities were now unsafe asset classes despite their history of safety, for example – and didn’t know that they didn’t know them. Their ignorance was disastrous. It rendered their good intentions useless.

The advantages of leaving most decisions to markets are that markets produce information to which governments have no ready access and markets leave more options open to decisionmakers. Free health-care markets allow doctors and patients to decide upon medical treatment, thereby generating vast quantities of information about how different individuals prefer and react to different regimes and medications. Most of this information is lost to government-dictated panels that formulate so-called “best practices” protocols under government health-care systems.

When regulators promulgate a rulemaking, they are betting all chips on their solution being the correct one. When they are wrong, as they were recently in housing and finance, the outcome can be catastrophic. Markets allow for differences of opinion among participants, thereby mitigating the results of mistakes. For example, banks who rigorously followed Basel banking guidelines and held ultra-safe assets like sovereign debt and mortgage securities stood a good chance of going bankrupt, while those who defied regulatory recommendations by diversifying their asset bases fared much better.

The Evolution of Unlimited Government Spending

The severe drawbacks of government spending are so important because the welfare-state model of unlimited government spending has gradually become dominant across the Western world. Starting with the Bismarck administration in Germany in the 1880s, spreading to Scandinavia and to post-World War II Great Britain, and then culminating with the triumph of big government in the U.S. in the 1960s, the trajectory of government spending has pointed skyward at the angle of a launched ballistic missile.

If government spending is so inefficient, why has it overpowered the Western fisc? For that matter, the current issue of The Economist notes that Asia is traveling the same path trod by the West. What Gresham’s Law of sovereign finance has achieved this perverse evolution?

Perhaps the answer can be found in the history of big government in the West. Economics developed as a science partly by exposing the shortcomings of government. These included the propensity to interfere with trade by taxing it or prohibiting it altogether, the futility of hamstringing markets with price ceilings and floors and the downside of printing money as a means of government finance. A conventional wisdom among economists relegated government action to a bare minimum of activities.

Unfortunately, reformers chafed at these restrictions on their ability to improve the lot of humanity. Their discontent coalesced around the idea that the bad effects of government action were a function of its form, not inherent to government itself. Price controls were developed by the Roman emperor Diocletian. Tariffs and quotas in international trade were the residue of the philosophy of mercantilism, followed by Spanish and French kings of the 16th and 17th centuries.

Surely dictatorship and monarchy were to blame for the backwardness of life under the ancien regime, not government per se. In contrast, prosperity and a large measure of peace had followed the advent of constitutional democracy in Europe and the U.S. If democracy could supplant authority in government, the good intentions and institutions of the democrats would overcome any inherent limitations of government and enable government to act more quickly, more surely and more comprehensively than private markets to undo the remaining evils of the world.

Alas, the 20th century taught us that professed good intentions are hardly a prophylactic against the damage wrought by government unchained. Bismarck’s concessions to 19th-century socialism led to a German welfare state, which led to – Adolf Hitler, of all things. 20th-century liberals scoffed at F.A. Hayek’s warnings against economic planning as the precursor of totalitarianism, but the welfare state has inexorably reduced freedom and free markets, as a glacier gradually engulfs all in its path.

The Collapse of the Government-Spending Machine

20th-century liberals in the Franklin Roosevelt administration envisioned a dynasty founded upon government spending. “Tax and tax, spend and spend, elect and elect” was their mantra. The formula has worked for nearly eighty years, not only in the U.S. but around the world.

Now the welfare state is foundering, largely on the issue of spending and its resulting debt. It is at least possible that if government spending were as efficient as private spending, we would tolerate the loss of freedom involved in exchange for the ostensible security provided by the welfare state. But government spending is so wildly inefficient and out of control that even if we were willing to sell our souls to Big Brother, none of us could afford the price tag. A tsunami of debt will drown the world monetary system and end the use of money for indirect exchange unless we make government our servant instead of our master.

Former British Prime Minister Margaret Thatcher once said that “the problem with socialism is that eventually you run out of other people’s money.” Although there are no theoretical limits on the ability of governments to create money, there are practical limits on our ability to absorb created money and government spending. Those limits are now in sight.

Most of the countries in the Eurozone have serious financial problems, either related to structural debt from overspending (Greece, Portugal, Belgium) or debt caused by bank bailouts (Spain, Italy, France, Great Britain, Ireland) or both. Only Germany and Switzerland are relatively problem-free, but they face the grim prospect of bailing out the rest. Banks in the U.S. are closely linked with European banks, particularly those in Great Britain. The need for spending reform is widely recognized, but the overspending has become so culturally entrenched that even a program of austerity, which is hardly thoroughgoing reform, raises the threat of riots and protests in the streets.

Only a few countries in the world have been prescient enough to recognize that the fool’s paradise is no longer inhabitable and must be depopulated via entitlement reform. Ironically, one of these is Sweden, which has passed its own version of Social Security privatization and has eschewed the Keynesian policies and monetary profligacy favored by American policymakers. Few would ever have predicted that Sweden and the U.S. would pass each other on the Road to Serfdom – going in opposite directions.