DRI-172 for week of 7-5-15: How and Why Did ObamaCare Become SCOTUSCare?

An Access Advertising EconBrief:

How and Why Did ObamaCare Become SCOTUSCare?

On June 25, 2015, the Supreme Court of the United States delivered its most consequential opinion in recent years in King v. Burwell. King was David King, one of several plaintiffs opposing Sylvia Burwell, Secretary of Health and Human Services. The case might more colloquially be called “ObamaCare II,” since it dealt with the second major attempt to overturn the Obama administration’s signature legislative achievement.

The Obama administration has been bragging about its success in attracting signups for the program. Not surprisingly, it fails to mention two facts that make this apparent victory Pyrrhic. First, most of the signups are people who lost their previous health insurance due to the law’s provisions, not people who lacked insurance to begin with. Second, a large chunk of enrollees are being subsidized by the federal government in the form of tax credits covering much of the cost of their insurance.

The point at issue in King v. Burwell is the legality of this subsidy. The original legislation provides for health-care exchanges established by state governments, and proponents have been quick to cite these provisions to pooh-pooh the contention that the Patient Protection and Affordable Care Act (PPACA) ushered in a federally-run, socialist system of health care. The specific language used by PPACA in Section 1401 is that the IRS can provide tax credits for insurance purchased on exchanges “established by the State.” That phrase appears 14 times in Section 1401, and each time it clearly refers to state governments, not the federal government. But in actual practice, states have found it excruciatingly difficult to establish these exchanges and many states have refused to do so. Thus, people in those states have turned to the federal-government website for health insurance and have nevertheless received a tax credit under the IRS’s interpretation of Section 1401. That interpretation has come to light in various lawsuits heard by lower courts, some of which have ruled for plaintiffs and against attempts by the IRS and the Obama administration to award the tax credits.

Without the tax credits, many people on both sides of the political spectrum agree, PPACA will crash and burn. Not enough healthy people will sign up for the insurance to subsidize those with pre-existing medical conditions for whom PPACA is the only source of external funding for medical treatment.

To a figurative roll of drums, the Supreme Court of the United States (SCOTUS) released its opinion on June 25, 2015. It upheld the legality of the IRS interpretation in a 6-3 decision, finding for the government and the Obama administration for the second time. And for the second time, the opinion for the majority was written by Chief Justice John Roberts.

Roberts’ Rules of Constitutional Disorder

Given that Justice Roberts had previously written the opinion upholding the constitutionality of the law, his vote here cannot be considered a complete shock. As before, the shock was in the reasoning he used to reach his conclusion. In the first case (National Federation of Independent Business v. Sebelius, 2012), Roberts interpreted a key provision of the law in a way that its supporters had categorically and angrily rejected during the legislative debate prior to enactment and subsequently. He construed the “individual mandate” requiring uninsured citizens to purchase health insurance as a tax. This rescued it from the otherwise untenable status of a coercive consumer directive – something not allowed under the Constitution.

Now Justice Roberts addressed the meaning of the phrase “established by the State.” He rejected the government Solicitor General’s earlier suggestion that the term was an undefined term of art. He also declined to apply the precedent established by the Court in Chevron, an earlier case involving the interpretation of statutes by administrative agencies. Under that precedent, when a statutory phrase is ambiguous, a reasonable interpretation by the agency charged with administering the law prevails. In this case, though, Roberts claimed that since “the IRS…has no expertise in crafting health-insurance policy of this sort,” Congress could not possibly have intended to grant the agency this kind of discretion.

Roberts conceded that “established by the State” does not naturally mean “established by the federal government.” But he reasoned that the Supreme Court cannot interpret the law this way because doing so would cause the law to fail to achieve its intended purpose. So, the Court must treat the wording as ambiguous and interpret it in such a way as to advance the goals intended by Congress and the administration. Hence, his decision for defendant and against plaintiffs.

In other words, he denied the IRS the authority to interpret the meaning of the phrase “established by the State” because of that agency’s lack of health-care-policy expertise, yet he was sufficiently confident of his own expertise in that area to interpret its meaning himself; it is his assessment of the market consequences that drives his decision to uphold the tax credits.

Roberts’ opinion prompted one of the most scathing, incredulous dissents in the history of the Court, by Justice Antonin Scalia. “This case requires us to decide whether someone who buys insurance on an exchange established by the Secretary gets tax credits,” begins Scalia. “You would think the answer would be obvious – so obvious that there would hardly be a need for the Supreme Court to hear a case about it… Under all the usual rules of interpretation… the government should lose this case. But normal rules of interpretation seem always to yield to the overriding principle of the present Court – the Affordable Care Act must be saved.”

The reader can sense Scalia’s mounting indignation and disbelief. “The Court interprets [Section 1401] to award tax credits on both federal and state exchanges. It accepts that the most natural sense of the phrase ‘an exchange established by the State’ is an exchange established by a state. (Understatement, thy name is an opinion on the Affordable Care Act!) Yet the opinion continues, with no semblance of shame, that ‘it is also possible that the phrase refers to all exchanges.’ (Impossible possibility, thy name is an opinion on the Affordable Care Act!)”

“Perhaps sensing the dismal failure of its efforts to show that ‘established by the State’ means ‘established by the State and the federal government,’ the Court tries to palm off the pertinent statutory phrase as ‘inartful drafting.’ The Court, however, has no free-floating power to rescue Congress from their drafting errors.” In other words, Justice Roberts has rewritten the law to suit himself.

To drive the point home, Scalia concludes with “…the Court forgets that ours is a government of laws and not of men. That means we are governed by the terms of our laws and not by the unenacted will of our lawmakers. If Congress enacted into law something different from what it intended, then it should amend the law to conform to its intent. In the meantime, the Court has no roving license …to disregard clear language on the view that … ‘Congress must have intended’ something broader.”

“Rather than rewriting the law under the pretense of interpreting it, the Court should have left it to Congress to decide what to do… [the] Court’s two cases on the law will be remembered through the years. And the cases will publish the discouraging truth that the Supreme Court favors some laws over others and is prepared to do whatever it takes to uphold and assist its favorites… We should start calling this law SCOTUSCare.”

Jonathan Adler of the much-respected and much-quoted law blog The Volokh Conspiracy put it this way: “The umpire has decided that it’s okay to pinch-hit to ensure that the right team wins.”

And indeed, what most stands out about Roberts’ opinion is its contravention of ordinary constitutional thought. It is not the product of a mind that began at square one and worked its way methodically to a logical conclusion. The reader senses a reversal of procedure; the Chief Justice started out with a desired conclusion and worked backwards to figure out how to justify reaching it. Justice Scalia says as much in his dissent. But Scalia does not tell us why Roberts is behaving in this manner.

If we are honest with ourselves, we must admit that we do not know why Roberts is saying what he is saying. Beyond question, it is arbitrary and indefensible. Certainly it is inconsistent with his past decisions. There are various reasons why a man might do this.

One obvious motivation might be that Roberts is being blackmailed by political supporters of the PPACA, within or outside of the Obama administration. Since blackmail is not only a crime but also a distasteful allegation to make, nobody will advance it without concrete supporting evidence – not only evidence against the blackmailer but also an indication of his or her ammunition. The opposite side of the blackmail coin is bribery. Once again, nobody will allege this publicly without concrete evidence, such as letters, tapes, e-mails, bank account or bank-transfer information. These possibilities deserve mention because they lie at the head of a short list of motives for betrayal of deeply held principles.

Since nobody has come forward with evidence of malfeasance – or is likely to – suppose we disregard that category of possibility. What else could explain Roberts’ actions? (Note the plural; this is the second time he has sustained PPACA at the cost of his own integrity.)

Lord Acton Revisited

To explain John Roberts’ actions, we must develop a model of political economy. That requires a short side trip into the realm of political philosophy.

Lord Acton’s famous maxim is: “Power tends to corrupt, and absolute power corrupts absolutely.” We are used to thinking of it in the context of a dictatorship or of an individual or institution temporarily or unjustly wielding power. But it is highly applicable within the context of today’s welfare-state democracies.

All of the Western industrialized nations have evolved into what F. A. Hayek called “absolute democracies.” They are democratic because popular vote determines the composition of representative governments. But they are absolute in scope and degree because the administrative agencies staffing those governments are answerable to no voter. And increasingly the executive, legislative and judicial branches of the governments wield powers that are virtually unlimited. In practical effect, voters vote on which party will wield nominal executive control over the agencies and dominate the legislature. Instead of a single dictator, voters elect a government body with revolving and rotating dictatorial powers.

As the power of government has grown, the power at stake in elections has grown commensurately. This explains the burgeoning amounts of money spent on elections. It also explains the growing rancor between opposing parties, since ordinary citizens perceive the loss of electoral dominance to be subjugation akin to living under a dictatorship. But instead of viewing this phenomenon from the perspective of John Q. Public, view it from within the brain of a policymaker or decisionmaker.

For example, suppose you are a completely fictional Chairman of a completely hypothetical Federal Reserve Board. We will call you “Bernanke.” During a long period of absurdly low interest rates, a huge speculative boom has produced unprecedented levels of real-estate investment by banks and near-banks. After stoutly insisting for years on the benign nature of this activity, you suddenly perceive the likelihood that this speculative boom will go bust and some indeterminate number of these financial institutions will become insolvent. What do you do? 

Actually, the question is really more “What do you say?” The actions of the Federal Reserve in regulating banks, including those threatened with or undergoing insolvency, are theoretically set down on paper, not conjured up extemporaneously by the Fed Chairman every time a crisis looms. These days, though, the duties of a Fed Chairman involve verbal reassurance and massage as much as policy implementation. Placing those duties in their proper light requires that our side trip be interrupted with a historical flashback.

Let us cast our minds back to 1929 and the onset of the Great Depression in the United States. At that time, virtually nobody foresaw the coming of the Depression – nobody in authority, that is. For many decades afterwards, the conventional narrative was that President Herbert Hoover adopted a laissez faire economic policy, stubbornly waiting for the economy to recover rather than quickly ramping up government spending in response to the collapse of the private sector. Hoover’s name became synonymous with government passivity in the face of adversity. Makeshift shanties and villages of the homeless and dispossessed became known as “Hoovervilles.”

It took many years to dispel this myth. The first truthteller was economist Murray Rothbard, whose 1963 book America’s Great Depression pointed out that Hoover had spent his entire term in a frenzy of activism. Far from remaining a pillar of fiscal rectitude, Hoover had presided over federal deficit spending so large that his successor, Democrat Franklin Delano Roosevelt, campaigned on a platform of balancing the federal-government budget. Hoover sternly warned corporate executives not to lower wages and adopted an official stance in favor of inflation.

Professional economists ignored Rothbard’s book in droves, as did reviewers throughout the mass media. Apparently the fact that Hoover’s policies failed to achieve their intended effects persuaded everybody that he couldn’t have actually followed the policies he did – since his actual policies were the very policies recommended by mainstream economists to counteract the effects of recession and Depression and were largely indistinguishable in kind, if not in degree, from those followed later by Roosevelt.

The anathematization of Herbert Hoover drove Hoover himself to distraction. The former President lived another thirty years, to age ninety, stoutly maintaining his innocence of the crime of insensitivity to the misery of the poor and unemployed. Prior to his presidency, Hoover had built a reputation as one of the great humanitarians of the 20th century by deploying his engineering and organizational skills in the cause of disaster relief across the globe. The trashing of his reputation as President is one of history’s towering ironies. As it happened, his economic policies were disastrous, but not because he didn’t care about the people. His failure was ignorance of economics – the same sin committed by his critics.

Worse than the effects of his policies, though, was the effect his demonization has had on subsequent policymakers. We do not remember the name of the captain of the Californian, the ship that lay anchored within sight of the Titanic but failed to answer distress calls and go to the rescue. But the name of Hoover is still synonymous with inaction and defeat. In politics, the unforgivable sin became not to act in the face of any crisis, regardless of the consequences.

Today, unlike in Hoover’s day, the Chairman of the Federal Reserve Board is the quarterback of economic policy. This is so despite the Fed’s ambiguous status as a quasi-government body, owned by its member banks with a leader appointed by the President. Returning to our hypothetical, we ponder the dilemma faced by the Chairman, “Bernanke.”

Bernanke directly controls only monetary policy and bank regulation. But he receives information about every aspect of the U.S. economy in order to formulate Fed policy. The Fed also issues forecasts and recommendations for fiscal and regulatory policies. Even though the Federal Reserve is nominally independent of politics and of the Treasury Department of the federal government, the Fed’s policies affect and are affected by government policies.

It might be tempting to assume that Fed Chairmen know what is going to happen in the economic future. But there is no reason to believe that is true. All we need do is examine their past statements to disabuse ourselves of that notion. Perhaps the popping of the speculative bubble that Bernanke now anticipates will produce an economic recession. Perhaps it will even topple the U.S. banking system like a row of dominoes and produce another Great Depression, a la 1929. But we cannot assume that either. The fact that we had one (1) Great Depression is no guarantee that we will have another one. After all, we have had 36 other recessions that did not turn into Great Depressions. There is nothing like a general consensus on what caused the Depression of 1929 and the 1930s. (The reader is invited to peruse the many volumes written by historians, economic and non-, on the subject.) About the only point of agreement among commentators is that a large number of things went wrong more or less simultaneously and all of them contributed in varying degrees to the magnitude of the Depression.

Of course, a good case might be made that it doesn’t matter whether Fed Chairmen can foresee a coming Great Depression or not. Until recently, one of the few things that united contemporary commentators was their conviction that another Great Depression was impossible. The safeguards put in place in response to the first one had foreclosed that possibility. First, “automatic stabilizers” would cause government spending to rise in response to any downturn in private-sector spending, thereby heading off any cumulative downward movement in investment and consumption in response to failures in the banking sector. Second, the Federal Reserve could and would act quickly in response to bank failures to prevent the resulting reverse-multiplier effect on the money supply. Third, bank regulations were modified and tightened to prevent failures from occurring or to restrict them to isolated cases.
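
For readers who want the arithmetic behind that “reverse-multiplier” effect, here is a stylized sketch from the textbook fractional-reserve model (a simplification that ignores currency holdings and excess reserves). With a required reserve ratio r, the deposits D that a stock of bank reserves R can support are

    D = R / r, so that a loss of reserves shrinks deposits by ΔD = ΔR / r.

With r = 0.10, every dollar of reserves wiped out by a bank failure extinguishes roughly ten dollars of deposits. It is this cumulative monetary contraction that quick Federal Reserve action was supposed to forestall.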

Yet despite everything written above, we can predict confidently that our fictional “Bernanke” would respond to a hypothetical crisis exactly as the real Ben Bernanke did respond to the crisis he faced and later described in the book he wrote about it. The actual and predicted responses are the same: Scare the daylights out of the public by predicting an imminent Depression of cataclysmic proportions and calling for massive government spending and regulation to counteract it. Of course, the real-life Bernanke claimed that he and Treasury Secretary Henry Paulson correctly foresaw the economic future and were heroically calling for preventive measures before it was too late. But the logic we have carefully developed suggests otherwise.

Nobody – not Federal Reserve Chairmen or Treasury Secretaries or California psychics – can foresee Great Depressions. Predicting a recession is only possible if the cyclical process underlying it is correctly understood, and there is no generally accepted theory of the business cycle. No, Bernanke and Paulson were not protecting America with their warning; they were protecting themselves. They didn’t know that a Great Depression was in the works – but they did know that they would be blamed for anything bad that did happen to the economy. Their only way of insuring against that outcome – of buying insurance against the loss of their jobs, their professional reputations and the possibility of historical “Hooverization” – was to scream for the biggest possible government action as soon as possible.

Ben Bernanke had been blasé about the effects of ultra-low interest rates; he had pooh-poohed the possibility that the housing boom was a bubble that would burst like a sonic boom with reverberations that would flatten the economy. Suddenly he was confronted with a possibility that threatened to make him look like a fool. Was he icy cool, detached, above all personal considerations? Thinking only about banking regulations, national-income multipliers and the money supply? Or was he thinking the same thought that would occur to any normal human being in his place: “Oh, my God, my name will go down in history as the Herbert Hoover of Fed chairmen”?

Since the reasoning he claims as his inspiration is so obviously bogus, it is logical to classify his motives as personal rather than professional. He was protecting himself, not saving the country. And that brings us to the case of Chief Justice John Roberts.

Chief Justice John Roberts: Selfless, Self-Interested or Self-Preservationist?

For centuries, economists have identified self-interest as the driving force behind human behavior. This has exasperated and even angered outside observers, who have mistaken self-interest for greed or money-obsession. It is neither. Rather, it merely recognizes that the structure of the human mind gives each of us a comparative advantage in the promotion of our own welfare above that of others. Because I know more about me than you do, I can make myself happier than you can; because you know more about you than I do, you can make yourself happier than I can. And by cooperating to share our knowledge with each other, we can make each other happier through trade than we could be if we acted in isolation – but that cooperation must preserve the principle of self-interest in order to operate efficiently.

Strangely, economists long assumed that the same people who function well under the guidance of self-interest throw that principle to the winds when they take up the mantle of government. Government officials and representatives, according to traditional economics textbooks, become selfless instead of self-interested when they take office. Selflessness demands that they put the public welfare ahead of any personal considerations. And just what is the “public welfare,” exactly? Textbooks avoided grappling with this murky question by hiding behind notions like a “social welfare function” or a “community indifference curve.” These are examples of what the late F. A. Hayek called “the pretense of knowledge.”

Beginning in the 1950s, the “public choice” school of economics and political science was founded by James Buchanan and Gordon Tullock. This school of thought treated people in government just like people outside of government. It assumed that politicians, government bureaucrats and agency employees were trying to maximize their utility and operating under the principle of self-interest. Because the incentives they faced were radically different from those faced by those in the private sector, outcomes within government differed radically from those outside of government – usually for the worse.

If we apply this reasoning to members of the Supreme Court, we are confronted by a special kind of self-interest exercised by people in a unique position of power and authority. Members of the Court have climbed their career ladder to the top; in law, there are no higher rungs. This has special economic significance.

When economists speak of “competition” among input-suppliers, we normally speak of people competing with others doing the same job for promotion, raises and advancement. None of these are possible in this context. What about more elevated kinds of recognition? Well, there is certainly scope for that, but only for the best of the best. On the current Court, positive recognition goes to those who write notable opinions. Only Justice Scalia has the special talent necessary to stand out as a legal scholar for the ages. In this sense, Justice Scalia is “competing” with other judges in a self-interested way when he writes his decisions, but he is not competing with his fellow justices. He is competing with the great judges of history – John Marshall, Oliver Wendell Holmes, Louis Brandeis, and Learned Hand – against whom his work is measured. Otherwise, a justice can stand out from the herd by providing the deciding or “swing” vote in close decisions. In other words, he can become politically popular or unpopular with groups that agree or disagree with his vote. Usually, that results in transitory notoriety.

But in historic cases, there is the possibility that it might lead to “Hooverization.”

The bigger government gets, the more power it wields. More government power leads to more disagreement about its role, which leads to more demand for arbitration by the Supreme Court. This puts the Court in the position of deciding the legality of enactments that claim to do great things for people while putting their freedoms and livelihoods in jeopardy. Any judge who casts a deciding vote against such a measure will go down in history as “the man who shot down” the Great Bailout/the Great Health Care/the Great Stimulus/the Great Reproductive Choice, ad infinitum.

Almost all Supreme Court justices have little to gain but a lot to lose from opposing a measure that promotes government power. They have little to gain because they cannot advance further or make more money and they do not compete with Marshall, Holmes, Brandeis or Hand. They have a lot to lose because they fear being anathematized by history, snubbed by colleagues, picketed or assassinated in the present day, and seeing their children brutalized by classmates or the news media. True, they might get satisfaction from adhering to the Constitution and their personal conception of justice – if they are sheltered under the umbrella of another justice’s opinion or they can fly under the radar of media scrutiny in a relatively low-profile case.

Let us attach a name to the status occupied by most Supreme Court justices and to the spirit that animates them. It is neither self-interest nor selflessness in their purest forms; we shall call it self-preservation. They want to preserve the exalted status they enjoy and they are not willing to risk it; they are willing to obey the Constitution, observe the law and speak the truth but only if and when they can preserve their position by doing so. When they are threatened, their principles and convictions suddenly go out the window and they will say and do whatever it takes to preserve what they perceive as their “self.” That “self” is the collection of real income, perks, immunities and prestige that go with the status of Supreme Court Justice.

Chief Justice John Roberts is an example of the model of self-preservation. In both of the ObamaCare decisions, his opinions for the majority completely abandoned his previous conservative positions. They plumbed new depths of logical absurdity – legal absurdity in the first decision and semantic absurdity in the second one. Yet one day after the release of King v. Burwell, Justice Roberts dissented in the Obergefell case by chiding the majority for “converting personal preferences into constitutional law” and disregarding the clear meaning of language in the laws being considered. In other words, he condemned precisely those sins he had himself committed the previous day in his majority opinion in King v. Burwell.

For decades, conservatives have watched in amazement, scratching their heads and wracking their brains as ostensibly conservative justices appointed by Republican presidents unexpectedly betrayed their principles when the chips were down, in high-profile cases. The economic model developed here lays out a systematic explanation for those previously inexplicable defections. David Souter, Anthony Kennedy, John Paul Stevens and Sandra Day O’Connor were the precursors to John Roberts. These were not random cases. They were the systematic workings of the self-preservationist principle in action.

DRI-287 for week of 9-8-13: Stop the Presses! ‘Government Does Not Spend Money Wisely.’

An Access Advertising EconBrief:

Stop the Presses! ‘Government Does Not Spend Money Wisely.’

When somebody tries to persuade you that they are smart by telling you something you already know, you are not impressed. When they insist that they just learned it after spending years wielding their expertise on the subject, you react by considering them stupid rather than smart. Alternatively, you suspect them of dishonesty. And when your informants turn out to have been highly placed officials in the government, you fear for the future of the nation.

That is the position in which Peter Orszag and John Bridgeland place readers of their article, “Can Government Play Moneyball?” which appears in the current issue of The Atlantic magazine. Orszag and Bridgeland have determined that “less than $1 out of every $100 of government spending is backed by even the most basic evidence that the money is being spent wisely.” To a substantial plurality of Americans – perhaps even a thin majority – this is about as surprising as the fact that the sun rose in the east this morning. But it is ostensibly a stunning revelation to the authors, who profess that “we were flabbergasted by how blindly the federal government spends.”

Are the authors anthropologists who just now returned to the United States after spending the last 50 years on an isolated tropical island, studying the native culture? As John Wayne might put it, not hardly. Both men are “former officials in the administrations of Barack Obama (Peter Orszag) and George W. Bush (John Bridgeland).” Both have sterling educational pedigrees (one in economics, one in law) that equip them to understand the logic of markets and the workings of government.

Both inhabit the belly of the Establishment beast. Orszag is a prep-school graduate and cum-laude PhD product of the London School of Economics. He was Director of both the Congressional Budget Office (CBO; 2007-2008) and the President’s Office of Management and Budget (OMB; 2009-2010). Bridgeland graduated from Harvard University and the University of Virginia School of Law and held down several positions in the Bush administration, including Assistant to the President, Director of USA Freedom Corps and Director of the White House’s Domestic Policy Council. He also taught a seminar on Presidential decision-making at Harvard’s Kennedy School of Government. After 9/11, he oversaw more than $1 billion worth of spending on domestic and international service programs. He currently heads a public-policy organization (Civic Enterprises) and vice-chairs a non-profit organization created to eradicate malaria in less-developed countries. He is also a noted educational activist who drew attention to the “silent epidemic” of high-school dropouts.

Given their backgrounds, we can assume that Orszag and Bridgeland are not fools. In the first paragraph of their article, they state that “the federal government” is “where spending decisions are largely based on good intentions, inertia, hunches, partisan politics, and personal relationships.” How, then, can Mr. Orszag and Mr. Bridgeland possibly claim to be surprised by what they found when they went to Washington? And what inferences should we draw from their attitude?

The Authors Already Knew That the Federal Government Spends Unwisely

From childhood on, the authors’ own experience had already ratified the idiocy of federal-government spending long before they set foot in Washington, D.C. They experienced Social Security withholding from their earliest paychecks. Their schooling taught them the rudiments of the Social Security system and its mandatory character. Orszag’s economics training introduced him to Paul Samuelson’s famous article rejoicing in the Ponzi-like, pay-as-you-go funding mechanism, which Samuelson considered a stroke of genius because the U.S. birth rate was then producing ever-larger streams of payers relative to recipients. And both authors have watched the ensuing baby bust drive the system into actuarial insolvency, bringing the day of default ever closer. Orszag and Bridgeland know only too well that Social Security has long been touted as the crown jewel of 20th-century liberalism’s welfare state. Likewise, both men have observed Medicare and Medicaid approaching a similar fate after previously attaining similarly sacrosanct status. These entitlement programs are de facto examples of government spending even though they are off-budget in the technical accounting sense. After observing these examples, why should Messrs. Orszag and Bridgeland have been shocked by anything else they found?

“In other types of American enterprise, spending decisions are usually quite sophisticated,” the authors observe. They are referring to American business, the vineyard in which both toiled prior to government service and to which they retreated to recover from the shock of their exposure to profligacy and waste. Corporations formulate a capital budget, in which potential investment projects are evaluated by comparing the present value of their costs and benefits. Shareholders calculate the best alternative use of their money in investment of equal risk and compare it with their rate of return, enabling them to judge the wisdom of their investment choice. Sole proprietors gauge the best alternative use of their labor time – perhaps working as an employee – and compare it to the earnings from their business. These are the ways used to gauge the wisdom and effectiveness of spending decisions in the private sector.
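
To make the present-value comparison concrete, here is a minimal sketch in Python (the project numbers and the 8% discount rate are purely illustrative, standing in for a firm’s cost of capital):

    def npv(cash_flows, discount_rate):
        """Net present value: cash_flows[0] is today's outlay (negative);
        later entries are the returns expected in each subsequent year."""
        return sum(cf / (1 + discount_rate) ** t
                   for t, cf in enumerate(cash_flows))

    # A project costing $1,000 today and returning $400 a year for three years:
    project = [-1000, 400, 400, 400]
    print(round(npv(project, 0.08), 2))  # about 30.84; positive, so the project
                                         # beats an alternative yielding 8%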

We know these methods work well because the United States became the world’s leading economy midway through the 20th century after carving a small foothold on the North American continent in the 17th century. Other countries imitated our methods and enjoyed similar success. Countries rejecting our methods generally failed. Even the few Scandinavian countries that built successful welfare states did it by utilizing relatively free markets, while countries that moved away from free markets by nationalizing industry (such as Great Britain and Argentina) experienced drastic declines in their living standards.

The federal government – and government generally – has no rational method for evaluating its spending decisions. Private businesses spend money in order to create value for consumers. They gauge the success or failure of their spending by the size of their profits. The federal government ostensibly spends money to benefit the same people served by private business. But the federal government does not earn profit, thus cannot gauge its success by its profits. There is no true owner of its assets – when something is “publicly owned,” nobody owns it and nobody has an incentive to maintain it, husband its productive potential and maximize its value. The government does not normally sell its output to private citizens at prices that are free to fluctuate in accordance with the supply and demand for that output; thus, it cannot use price fluctuations to gauge the success of its efforts. Even if politicians wanted to, they have no way to gather the information necessary to tailor government spending to the desires of all their constituents, in the fashion of markets. Since no human being or institution possesses a complete picture of reality, both incentives and institutions must be favorably attuned to allow our subjective perceptions to satisfy individual wants. Free, competitive markets calibrate the key variables to produce this result while government fails utterly. The last thing politicians, bureaucrats and government employees can afford is a thoroughgoing analysis of government programs, their results and the reasons for them.

So politicians clasp hands earnestly to their breasts and swear to spend money for the benefit of “the 99%, not the 1%,” or “Main Street, not Wall Street.” That is, they profess “good intentions.” They pass baseline budgeting rules declaring that spending on federal programs must always rise by a certain percentage every year, no matter what (i.e., “inertia”). Politicians spend money on electric cars and wind farms and ethanol subsidies because they have “hunches” that these measures are the wave of the future. First-year legislators are told that they must agree to support the spending programs of their incumbent colleagues in order to gain support for their own legislative proposals, thereby establishing “partisan politics” as a potent force behind wasteful spending.

The authors actually provide a specific example to bolster their choice of “personal relationships” as a roadblock to wise spending. In 2003, Bridgeland and officials at OMB judged that the Even Start Family Literacy Program was a waste of money. Why? Because the children and parents who participated in it showed no more gains in literacy than did those in a control group used for comparison. So the program was marked for elimination. “But Even Start was founded in 1989 by Bill Goodling, a well-liked Republican who had been the Chairman of the House Education and the Workforce Committee, and had previously served as a teacher, principal, and school superintendent in Pennsylvania. So Congress continued to fund this ineffective, if well-meaning, program to the tune of more than $1 billion over the life of the Bush administration.”

Orszag and Bridgeland left out a few important spending determinants from their list. For years, “fraud” and “abuse” have figured prominently in task-force reports on federal-government spending. Both men will recall the infamous “bridge to nowhere” of a few years ago. Fraud has risen to mammoth proportions in the Medicare and Social Security programs. Nothing was said about “graft” in the article, but the movie Mr. Smith Goes to Washington was released before both authors were born and it is safe to assume that both have seen it.

All in all, the faux outrage expressed by Orszag and Bridgeland lacks credibility. Their years of service in government allowed them to fill in the blanks of their indictment, but produced no other added value. They knew going in that the federal government was every bit as wasteful as they now portray it. Their disingenuous attitude – I’m shocked – shocked! – to find gambling going on here! – is borrowed from Claude Rains in Casablanca.

This is bad enough. Their proposed solution is worse. Citing baseball’s transformation into “Moneyball” as a case of private-sector spending sophistication, they aver that “the lessons of moneyball could make our government better.” You heard right. They don’t want to make government spend less. They want to make it spend better.

“Moneyball” in Government? Why Not “Nomoneyball?”

Orszag and Bridgeland maintain that “the moneyball formula in baseball – replacing scouts’ traditional beliefs and biases…with data-intensive studies of what skills actually contribute most to winning – is just as applicable to the battle against out-of-control health-care costs.” Their argument for assembling expert knowledge and applying it via central planning goes back at least as far as the “soviet of experts” advocated by institutional economist Thorstein Veblen in the early 1900s.

The problem is that it isn’t expert knowledge that is lacking as much as the “particular knowledge of time and place” conveyed by the price system and utilized best by the individual patient and doctor. That truth has already dawned on many doctors, patients and policymakers such as John Goodman of the National Center for Policy Analysis. Obamacare already embodies the demand for government-chosen best-practices medicine, but those choices will be made using the criterion of statistical significance. This is a recipe for disaster, since medicine in fields such as oncology has moved rapidly in the direction of individually tailored drugs and therapies rather than the “one-size-fits-all” approach implied by statistical regression. By its very nature, government action must involve coercion when flexibility and feedback are what is most urgently needed.

Thus, by applying the “moneyball” formula to health care, the authors are actually embracing the pretense of effectiveness in spending rather than the genuine article. They should be arguing for a return to the price system instead. “It is indisputable, however, that a move toward payments based on performance would harm some businesses. If most of your profits come from a medical device or procedure that …doesn’t work all that well, you’re likely to resist anyone sorting through what works and what doesn’t, never mind changing payment accordingly. Health-care interests are wise to invest millions of dollars in campaign contributions and lobbying to protect billions of dollars in profits.” The authors have just made the case against involving the government in health care and in favor of allowing free markets to work. Free markets are the best device ever invented for enforcing “pay for performance.” Leaving government out would eliminate campaign contributions and lobbying completely. As we will soon see, the authors’ method would accomplish none of these objectives.

The Moneyball Hook

The selection of “Moneyball” as the authors’ marketing hook reveals their lack of seriousness. They try to persuade their readers by connecting emotionally rather than rationally. Moneyball was a tactic used successfully by one baseball team (the Oakland Athletics) during one pennant race. Its name derives from a book, but the authors picked it because of the successful movie adapted from it.

Selling “free markets” would make perfect logical sense, since this is the same device that disciplines spending for thousands of businesses around the world. It has worked for centuries. But as a marketing concept, it has no sex appeal. No recent movie used it; no top-40 recording gyrated to it; no leading rap group is named for it. And the authors are only trying to sell a concept; they are not really trying to succeed in reducing spending or improving its quality.

How do we know the authors are not really trying? They tell us – not in so many words, but indirectly.

The System is Rigged Against Spending Changes or Reductions

The authors relate the history of so-called attempts to evaluate government-spending programs and jettison the ones that aren’t working. During the Clinton administration, the Government Performance and Results Act directed Congress “to provide for the establishment of strategic planning and performance measurement in the Federal Government.” The use of vague, circumlocutory language is a classic bureaucratic way of avoiding clarity and specificity – in this case, of avoiding commitment to eliminating wasteful spending. Sure enough, no link was established between performance assessments and continued funding by Congress.

The Bush administration, egged on by Bridgeland, established the Program Assessment Rating Tool (PART). This specifically identified programs that were not working as intended and tried to get them improved or discontinued.

Or did it? It seems that the assessment process has five possible outcomes. A program could be declared “effective,” “moderately effective,” “adequate” or “ineffective.” The fifth possibility was “results not demonstrated” due to insufficient data for evaluation. It is clear that four of the five possible outcomes were designed to justify continued funding, while even a rating of “ineffective” would not necessarily lead to termination of a program but would rather call for “improvement.” Not surprisingly, of the 1,000 programs assessed during Bush’s tenure, only 3% were adjudged “ineffective.” And Congress apparently ignored OMB’s recommendations to reform or abolish even that paltry percentage.

The reader might pause to consider that restaurants serving very good food go broke every day; an Internet review of “effective,” “moderately effective” or “adequate” would probably be the kiss of death in that business. And the people doing the rating have only their own inner fidelity to truth and honesty as an incentive to be honest in their evaluations – the institutional incentives for the federal government to discipline its own spending range from slim to none.

Beginning in 1990, the federal government has actually tested 11 large social programs, comprising some $2 billion in aggregate annual spending, using randomized controlled trials of effectiveness. The trials tested the results of spending by comparing the effects on recipients to those experienced by a control group who did not receive the benefits of the spending. Ten of the 11 programs showed either no effects on recipients or only a weak positive effect.
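
For the curious, here is a minimal sketch in Python of the kind of comparison such trials make (the outcome scores are synthetic, generated purely for illustration; real program evaluations are far more elaborate):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    treated = rng.normal(loc=50.5, scale=10.0, size=500)  # program participants
    control = rng.normal(loc=50.0, scale=10.0, size=500)  # randomized comparison group

    # Welch's t-test: is the difference in mean outcomes distinguishable from zero?
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
    print(f"estimated effect: {treated.mean() - control.mean():.2f}, p = {p_value:.3f}")
    # A large p-value is the statistical face of "no demonstrable effect."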

In some cases, programs were found to do positive harm. Government funding of so-called “Scared Straight” programs was found to “make kids about 12 percent more likely to commit a crime.” 21st Century Community Learning Centers, an afterschool program designed to improve academic performance of elementary-level students, has failed to affect academic outcomes but has increased the number of school suspensions and other behavioral infractions. Even this finding of counterproductivity was not enough to kill funding for the Learning Centers, which were saved by the intercession of then-gubernatorial candidate Arnold Schwarzenegger of California and are still receiving $1 billion in federal funds today.

Are the Authors As Mad As Hell? Are They Not Going to Take It Any More? Not Even Close

You might suppose that by now Messrs. Orszag and Bridgeland are mad as hell and aren’t going to take this any more. No, unlike Howard Beale, Network‘s “Mad Prophet of the Airwaves,” the authors are mildly irritated and ready to offer constructive alternatives. By golly, a non-profit organization called Results for America “is calling for reserving 1 percent of program spending for evaluation: for every $99 we spend on a program …we would spend $1 making sure the program actually works.” Don’t you just love those calls for action? Don’t you just love those non-profits?

“The more evidence we have, the stronger it is; and the more systematically it is presented, the harder it will be for lawmakers to ignore.” Uh… haven’t you just been telling us that they’ve done just that, for decades? “Still, linking evaluation to program funding will be tough, as both of us have seen in practice, again and again.” Aha. So the upshot of the authors’ ineffectual whining is that we’re supposed to do more of what’s conclusively failed in the past, but expect a different outcome this time. This is the operational definition of insanity – as if we weren’t already being driven insane by the actions of government gone wild.

Grasping at straws, the authors ask hopefully (drum roll, please): “What if we had a Moneyball Index, easily accessible to voters and the media, that rated each member of Congress on their votes to fund programs that have been shown not to work?” At ages 44 (Orszag) and 53 (Bridgeland), the authors are not too young to recall Sen. William Proxmire of Wisconsin, whose “Golden Fleece” awards were designed to attract media attention to Congressional spending projects that Proxmire felt were wasteful or even counterproductive. Awardees included a science grant to study why people fall in love, a study of Peruvian brothels (which proved particularly popular with its researchers) and a study of the buttock dimensions of airline stewardesses. (Proxmire’s awards were handed out from 1975 to 1987, when the job description of “airline stewardess” was still operative.)

The authors should ponder Proxmire’s fate. After a dozen-year run in which he actually succeeded in killing a few small-scale raids on the public treasury like the above examples, Proxmire’s career ended at age 74 in 1989. He didn’t retire due to age; his well-publicized physical-fitness regime made him perhaps the best-conditioned Senator. Instead, he was “retired” by the government-spending Empire, which struck back at him electorally when he finally lampooned a few too many of their pet projects. It should also be remembered that Proxmire, the scourge of wasteful spending, was a consistent supporter of dairy price-supports.

So much for “public shaming.” There is little point in shaming people who have no shame. Is there a group more institutionally bereft of shame than the U.S. Congress, whose poll approval rating hovers near single digits yet which adamantly refuses to reform itself?

Too Little, Too Late, Too Ineffectual

Figuratively speaking, Orszag and Bridgeland are canvassing the Titanic lifeboats to recruit passengers to go on iceberg watch. If there was ever a time to fussily insist upon substituting wise spending for wasteful spending, it expired nearly a century ago. Today, the Federal Reserve is directly buying new Treasury debt to the tune of a trillion dollars per year. Banks are husbanding $4 trillion or so in excess reserve accounts. The Fed has been desperately pegging short-term interest rates as close to zero as possible for years. It has not been trying to “stimulate the economy,” because it has encouraged banks to hold the reserves rather than loaning them out. (Had they been loaned out, we would have experienced hyperinflation.) Rather, it has been trying to buy time during which the Treasury can pay the lowest possible interest on outstanding debt, so as to keep interest payments from overwhelming the federal budget.

Orszag and Bridgeland should be advancing on the federal budget with a meat ax, intending to decapitate entire cabinet-level departments. They should be screaming at the top of their lungs in press conferences, not daydreaming about cutesy marketing ploys like “Moneyball” in the pages of a left-leaning opinion journal like The Atlantic (circulation: 400,000+). And they should have started their campaigns while still in government, not after bailing out to the private sector. The fact that two former “spending czars” – heads of CBO, OMB and the Domestic Policy Council – were too timid to speak out against runaway spending while in government and still didn’t come within shouting distance of the whole truth years after leaving the bureaucracy tells us that there is no hope at all of reforming government from the inside. That is, the real shocker about this article is not even what the authors say; it is what they have refused to say.

What really motivated Orszag and Bridgeland to speak out? After all, they could have simply sat silent. Perhaps they were caught between two alternatives, like Buridan’s Ass. As members in good standing of the Establishment, they couldn’t simply up and admit that the federal government is one big welfare project – not for the purported beneficiaries, the program recipients, but for government employees who pick up paychecks while performing work of little or no true value. Yet neither could they do nothing while watching their country go down the drain. This half-hearted, empty-headed response was all they could muster.

DRI-345 for week of 9-9-12: Other People’s Money

An Access Advertising EconBrief:

Other People’s Money

The elephant in the room in any political discussion is the ongoing debt crisis in Europe and the impending one in the U.S. Turn over the debt coin to reveal spending; the two go together like dissipation and death.

We strive to understand the complex and unfamiliar by likening it to the familiar. It is commonplace to read explications of government spending and debt that treat the government as one great big corporation or, worse, as the head of our national household. In fact, it is precisely the differences between the behavior of government and the behavior of households and businesses in daily life that give rise to the misunderstanding.

In their immortal bestselling text Free to Choose (companion piece to a hit 1980 PBS series), Milton and Rose Friedman developed a beautifully concise matrix to illustrate the differences between government and private spending. Herein lies the key to the avalanche of debt poised to engulf the world.

The Spending Matrix

In a modern economy, money serves as a lubricant to the exchange of goods and services between individuals and businesses. The income received by households for supplying input services to businesses and government forms the basis for expenditure on consumption goods and services. Income not consumed is saved and invested; an excess of current consumption spending over income constitutes dissaving and is financed by borrowing to incur debt. Government “income” is derived from tax revenue and expenditure is undertaken to provide consumption and investment benefits to citizens. Once again, any excess of expenditure over income must be financed, either by money creation or borrowing to incur debt.

The Friedmans explain why the efficiency of the expenditure process depends crucially on the origin of the money being spent and the identity of the spending beneficiaries. First, they identify the four basic categories of spending. Money is the vehicle for spending. Either the money is yours (originating via income you earned) or it is supplied by somebody else via the intermediation of government, which levies taxes and gives you the proceeds. Either you are spending the money on yourself or you are spending it on somebody else. The possibilities reduce to four spending categories.

Category I denotes the case in which you are spending your own money on yourself. In this situation, spending is at its most efficient. The word “efficient” has two everyday meanings, both of which are germane here. First, you spend your own money efficiently because you have the strongest possible incentive not to spend any more than necessary for a given quantity and quality of good or service. This is so because there is nobody whose welfare means more to you than yours. (For our purposes, we can stipulate the content of “you” to include members of your household.) Second, you want to get the most value for your expenditure – that is, for a given expenditure you want to get the best quality and most appropriate items. Again, this makes sense because nobody means more to you than you.

In Category II spending, you are spending your money on somebody else. Your spending will be efficient in the first sense – you will still strive to minimize the cost of a given quality of purchases – but not in the second sense. You are not obsessively concerned with value-maximization because you yourself are not consuming the goods you purchase – somebody else is. Any doubt about the truth of this observation will yield to a study of the yearly gift-return statistics during the Christmas season.

In Category III spending, you are spending somebody else’s money on yourself. Now you will strive to get the best possible value for your money, but you will not be rigorously concerned with cost-minimization because you are not spending your own money – you are spending somebody else’s money. Your utility or satisfaction depends on the goods and services you consume, so you have every incentive to acquire goods and maximize their value, but your utility is unaffected by the efficiency with which other people’s money is spent. Thus, you have no incentive to waste time worrying about it.

Category IV spending is the kind undertaken by government. Legislators spend somebody else’s money on somebody else. Consequently, they have no incentive to spend efficiently in either sense. They are not spending their own money and they do not themselves benefit from the expenditures, so their own consumption does not depend on the expenditures. Thus, legislators neither minimize the cost of purchasing a given quality of goods using somebody else’s money, nor do they maximize the value of the goods they purchase for others to consume.
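
The whole matrix can be summarized compactly. Here is a minimal sketch in Python encoding the four categories as just described (the two flags record the cost-minimizing and value-maximizing incentives discussed above):

    # (whose money, spent on whom) -> (category, minimizes cost?, maximizes value?)
    spending_matrix = {
        ("your money",           "on yourself"):     ("I",   True,  True),
        ("your money",           "on someone else"): ("II",  True,  False),
        ("someone else's money", "on yourself"):     ("III", False, True),
        ("someone else's money", "on someone else"): ("IV",  False, False),  # government
    }

    for (source, target), (category, min_cost, max_value) in spending_matrix.items():
        print(f"Category {category}: {source} {target} -> "
              f"cost-minimizing: {min_cost}, value-maximizing: {max_value}")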

Government Spending

During the 20th century, political economy saw a worldwide trend toward increase in the size and activity of government. The duties of government came to include not merely tasks that private business and individuals were unable to perform, such as national defense, but activities that had heretofore been confined to the private sector, such as the provision and regulation of medical care.

Proponents of bigger government hailed this trend while devotees of limited government deplored it. In terms of our model of spending, the substitution of government for the private sector means a change in spending category and in the relative efficiency with which money is spent. Evaluating this change is one of the best ways of deciding whether more and bigger government is good or bad.

Most government spending is Category IV spending. Legislators appropriate large sums of money from the Treasury and spend it for the benefit of large groups of people or the nation at large. Sometimes the money being spent has a traceable relationship to money raised from the public; sometimes it does not. Sometimes the legislators actually contribute to the funds from which the spending is drawn; sometimes they don’t. (For years, Federal employees were exempt from Social Security and had their own retirement plan. Likewise, state employees hired before 1986 do not contribute to Medicare.) But an individual legislator’s percentage share of the spending and benefits is so minute as to be imperceptible; other incentives swamp the cost and value considerations cited above for legislators.

Category IV spending is the least efficient kind of spending in both senses of the word. It is also the kind of spending most conducive to fraud. Fraud is generally thought of as “deceit” or “trickery,” but its legal definition requires that the perpetrator lacked any intention of performing or providing the contracted-for good or service. Intentions are best gauged and fulfilled by their possessor; by definition, one cannot defraud oneself legally, however self-deceptive one’s actions may be psychologically. Thus, Category I spending is proof against fraud. Category II and III spending has at least the safeguard that you are vetting one end of the transaction, although this is not absolute proof against fraud. But Category IV spending is an open invitation to fraud, since nobody has a direct interest in efficient spending on either end of the transaction.

A Case Study in Government Overspending: Medicare

The Medicare program is a classic case of inefficient government spending in general and an invitation to fraud in particular. Medicare’s general inefficiency lies in its Category IV status. The recipients of the Medicare program are (as a first approximation) elderly Americans. But program expenditures are ultimately determined by government, which approves covered procedures and global budgets. Efficient spending requires patients to view doctor visits, tests and medical procedures as expenses, buying them only when their prospective value outweighs their cost. Doctors should aid patients in determining prospective benefits. Instead, the program grossly distorts the true economic costs of medical treatment by understating them. Doctors have no incentive to seek least-cost treatment regimes, since they know that patients pay only a relatively small deductible ($140) and 20% of subsequent treatment costs. Patients have little incentive to minimize costs, since they pay so little at the margin for additional treatment. This alone is a formula for overspending – which is just what has happened around the world, forcing most countries to ration medical treatment inefficiently by queue and government fiat rather than efficiently through the price system.
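
A back-of-the-envelope calculation, using the deductible and coinsurance figures just cited (and ignoring premiums, coverage limits and supplemental insurance), shows how weak the patient’s incentive is at the margin. For a treatment episode costing C dollars, with C above the deductible, the patient’s out-of-pocket outlay is

    \[ P(C) \;=\; 140 \;+\; 0.20\,(C - 140), \qquad \frac{dP}{dC} \;=\; 0.20. \]

Each additional dollar of treatment costs the patient twenty cents: raising a bill from $1,000 to $2,000 raises the patient’s outlay by only $200, with taxpayers covering the other $800.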

Total Medicare expenditures exceed $500 billion annually. Detected fraud of just under $50 billion (roughly ten percent of total spending) probably understates the problem; nobody knows the true extent of Medicare fraud. One of the most prevalent forms is billing fraud, in which providers bill the government for services not performed. In these cases, the government is spending taxpayers’ money for the ostensible benefit of patients but the actual benefit of providers. Since patients do not pay the bill, they have no incentive to detect or object to the fraudulent payments. Sometimes fraudsters will include patients in the scheme in order to reduce the likelihood of detection. Since patients are not paying the bill, they do not lose from undetected fraud but do gain from kickbacks.

What about the fact that Medicare recipients are also (often still) taxpayers? Since no action taken by Medicare recipients can affect taxes already collected from taxpayers, recipients quite correctly view those taxes as sunk costs. They ignore them and abuse the system just as much as any non-taxpayer. And their behavior is economically rational.

The Limitations of Government

Why are you a more efficient spender of your money than government? The Friedmans accurately pinpointed one key reason: incentives. You have the strongest incentive to achieve both kinds of efficient spending, cost-minimization and value-maximization. But they almost completely overlooked another, equally important reason: information. In order to buy at least cost, you must be able to locate the names and prices of the relevant sellers. In order to maximize value, you must obtain relevant information about quality and about potential substitute and complementary goods.

Economists have traditionally taken this ability for granted, which may be why the Friedmans mostly ignored the issue. Even the well-known economic treatments of the subject of information by Nobel laureates George Stigler and Gary Becker have begged key questions by assuming that buyers and sellers would automatically gather information up to the point where it was no longer economically sensible to continue. The missing link in these treatments is that they assume consumers already know the nature and type of information that needs to be gathered. In other words, consumers are supposed to already know what they don’t know, and their only problem is how (and to what extent) to find it out. Or, to borrow a form of expression currently popular, Stigler and Becker assumed that the problem is one of “known unknowns.”
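
The point can be made precise with the textbook sequential-search condition descended from Stigler’s work (a standard formulation offered here for illustration, not a quotation of either author). A buyer who knows the distribution F(p) of prices, and who pays a cost c for each additional price quote, keeps searching until the best price found falls to the reservation price r* defined by

    \[ c \;=\; \int_0^{r^*} F(p)\,dp, \]

where the right-hand side is the expected saving from one more quote. Note what the buyer must be given in advance: the distribution F itself. He is assumed to know exactly what he doesn’t know; the model says nothing about a buyer who doesn’t know which prices, products or characteristics he ought to be sampling in the first place.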

But the ghastly failures of regulation that led to the housing-market collapse, financial crisis and Great Recession show that the problem of “unknown unknowns” is at least as big. Regulators didn’t know various things – that sovereign debt and mortgage securities were now unsafe asset classes despite their history of safety, for example – and didn’t know that they didn’t know them. Their ignorance was disastrous. It rendered their good intentions useless.

The advantages of leaving most decisions to markets are that markets produce information to which governments have no ready access and markets leave more options open to decisionmakers. Free health-care markets allow doctors and patients to decide upon medical treatment, thereby generating vast quantities of information about how different individuals prefer and react to different regimes and medications. Most of this information is lost to government-dictated panels that formulate so-called “best practices” protocols under government health-care systems.

When regulators promulgate a rule, they are betting all their chips on their solution being the correct one. When they are wrong, as they were recently in housing and finance, the outcome can be catastrophic. Markets allow for differences of opinion among participants, thereby mitigating the results of mistakes. For example, banks that rigorously followed Basel banking guidelines and held supposedly ultra-safe assets like sovereign debt and mortgage securities stood a good chance of going bankrupt, while those that defied regulatory recommendations by diversifying their asset bases fared much better.

The Evolution of Unlimited Government Spending

The severe drawbacks of government spending are so important because the welfare-state model of unlimited government spending has gradually become dominant across the Western world. Starting with Bismarck’s social-insurance legislation in Germany in the 1880s, spreading to Scandinavia and to post-World War II Great Britain, and culminating with the triumph of big government in the U.S. in the 1960s, the trajectory of government spending has pointed skyward at the angle of a launched ballistic missile.

If government spending is so inefficient, why has it overpowered the Western fisc? For that matter, the current issue of The Economist notes that Asia is traveling the same path trod by the West. What Gresham’s Law of sovereign finance has driven this perverse evolution?

Perhaps the answer can be found in the history of big government in the West. Economics developed as a science partly by exposing the shortcomings of government. These included the propensity to interfere with trade by taxing it or prohibiting it altogether, the futility of hamstringing markets with price ceilings and floors, and the downside of printing money as a means of government finance. A conventional wisdom among economists relegated government action to a bare minimum of activities.

Unfortunately, reformers chafed at these restrictions on their ability to improve the lot of humanity. Their discontent coalesced around the idea that the bad effects of government action were a function of its form, not inherent in government itself. Price controls went back at least to the Roman emperor Diocletian. Tariffs and quotas in international trade were the residue of mercantilism, the philosophy followed by Spanish and French kings of the 16th and 17th centuries.

Surely dictatorship and monarchy were to blame for the backwardness of life under the ancien regime, not government per se. In contrast, prosperity and a large measure of peace had followed the advent of constitutional democracy in Europe and the U.S. If democracy could supplant autocracy in government, the good intentions and institutions of the democrats would overcome any inherent limitations of government and enable it to act more quickly, more surely and more comprehensively than private markets to undo the remaining evils of the world.

Alas, the 20th century taught us that professed good intentions are no prophylactic against the damage wrought by government unchained. Bismarck’s concessions to 19th-century socialism led to a German welfare state, which led to – Adolf Hitler, of all things. 20th-century liberals scoffed at F.A. Hayek’s warnings against economic planning as the precursor of totalitarianism, but the welfare state has inexorably reduced freedom and free markets the way a glacier gradually engulfs all in its path.

The Collapse of the Government-Spending Machine

20th-century liberals in the Franklin Roosevelt administration envisioned a dynasty founded upon government spending. “Tax and tax, spend and spend, elect and elect” was their mantra. The formula has worked for nearly eighty years, not only in the U.S. but around the world.

Now the welfare state is foundering, largely on the issue of spending and its resulting debt. It is at least possible that if government spending were as efficient as private spending, we would tolerate the loss of freedom involved in exchange for the ostensible security provided by the welfare state. But government spending is so wildly inefficient and out of control that even if we were willing to sell our souls to Big Brother, none of us could afford the price tag. A tsunami of debt will drown the world monetary system and end the use of money for indirect exchange unless we make government our servant instead of our master.

Former British Prime Minister Margaret Thatcher once said that “the problem with socialism is that eventually you run out of other people’s money.” Although there are no theoretical limits on the ability of governments to create money, there are practical limits on our ability to absorb created money and government spending. Those limits are now in sight.

Most of the major economies of Europe have serious financial problems, whether structural debt from overspending (Greece, Portugal, Belgium), debt caused by bank bailouts (Spain, Italy, France, Great Britain, Ireland), or both. Only Germany and Switzerland are relatively problem-free, but they face the grim prospect of bailing out the rest. Banks in the U.S. are closely linked with European banks, particularly those in Great Britain. The need for spending reform is widely recognized, but overspending has become so culturally entrenched that even a program of austerity, which is hardly thoroughgoing reform, raises the threat of riots and protests in the streets.

Only a few countries in the world have been prescient enough to recognize that the fool’s paradise is no longer inhabitable and must be depopulated via entitlement reform. Ironically, one of these is Sweden, which has passed its own version of Social Security privatization and has eschewed the Keynesian policies and monetary profligacy favored by American policymakers. Few would ever have predicted that Sweden and the U.S. would pass each other on the Road to Serfdom – going in opposite directions.