DRI-172 for week of 7-5-15: How and Why Did ObamaCare Become SCOTUSCare?

An Access Advertising EconBrief:

How and Why Did ObamaCare Become SCOTUSCare?

On June 25, 2015, the Supreme Court of the United States delivered its most consequential opinion in recent years in King v. Burwell. King was David King, one of several plaintiffs opposing Sylvia Burwell, Secretary of Health and Human Services. The case might more colloquially be called “ObamaCare II,” since it dealt with the second major attempt to overturn the Obama administration’s signature legislative achievement.

The Obama administration has been bragging about its success in attracting signups for the program. Not surprisingly, it fails to mention two facts that make this apparent victory Pyrrhic. First, most of the signups are people who lost their previous health insurance due to the law’s provisions, not people who lacked insurance to begin with. Second, a large chunk of enrollees are being subsidized by the federal government in the form of a tax credit toward the cost of the insurance.

The point at issue in King v. Burwell is the legality of this subsidy. The original legislation provides for health-care exchanges established by state governments, and proponents have been quick to cite these provisions to pooh-pooh the contention that the Patient Protection and Affordable Care Act (PPACA) ushered in a federally run, socialist system of health care. The specific language used by PPACA in Section 1401 is that the IRS can provide tax credits for insurance purchased on exchanges “established by the State.” That phrase appears 14 times in Section 1401, and each time it clearly refers to state governments, not the federal government. But in actual practice, states have found it excruciatingly difficult to establish these exchanges and many states have refused to do so. Thus, people in those states have turned to the federal-government website for health insurance and have nevertheless received a tax credit under the IRS’s interpretation of Section 1401. That interpretation has come to light in various lawsuits heard by lower courts, some of which have ruled for plaintiffs and against attempts by the IRS and the Obama administration to award the tax credits.

Without the tax credits, many people at both ends of the political spectrum agree, PPACA will crash and burn. Not enough healthy people will sign up for the insurance to subsidize those with pre-existing medical conditions, for whom PPACA is the only source of external funding for medical treatment.
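To see the mechanics that “crash and burn” alludes to, consider a stylized sketch of the adverse-selection “death spiral.” The numbers below are invented purely for illustration; the point is only that a community-rated pool without subsidies can unravel as its healthiest members exit:

```python
# A stylized adverse-selection ("death spiral") sketch with invented numbers.
# Premiums are set to the pool's average expected cost; without subsidies,
# members whose expected benefit is below the premium drop out, which raises
# the average cost of those who remain, and so on until the pool unravels.

# hypothetical expected annual medical costs of ten enrollees
costs = [500, 800, 1200, 1500, 2000, 3000, 4000, 6000, 9000, 15000]

pool = costs[:]
while pool:
    premium = sum(pool) / len(pool)              # community-rated premium
    stayers = [c for c in pool if c >= premium]  # only costlier members stay
    if len(stayers) == len(pool):
        break                                    # pool is stable
    print(f"premium {premium:,.0f} drives out {len(pool) - len(stayers)} enrollees")
    pool = stayers

print(f"final pool: {pool}, premium {sum(pool) / len(pool):,.0f}")
```

The subsidies at issue in King v. Burwell work against exactly this unraveling by keeping the healthier, lower-cost enrollees in the pool.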

To a figurative roll of drums, the Supreme Court of the United States (SCOTUS) released its opinion on June 25, 2015. It upheld the legality of the IRS interpretation in a 6-3 decision, finding for the government and the Obama administration for the second time. And for the second time, the opinion for the majority was written by Chief Justice John Roberts.

Roberts’ Rules of Constitutional Disorder

Given that Justice Roberts had previously written the opinion upholding the constitutionality of the law, his vote here cannot be considered a complete shock. As before, the shock was in the reasoning he used to reach his conclusion. In the first case (National Federation of Independent Business v. Sebelius, 2012), Roberts interpreted a key provision of the law in a way that its supporters had categorically and angrily rejected, both during the legislative debate prior to enactment and afterward. He referred to the “individual mandate” – the requirement that uninsured citizens purchase health insurance – as a tax. This rescued it from the otherwise untenable status of a coercive consumer directive – something not allowed under the Constitution.

Now Justice Roberts addressed the meaning of the phrase “established by the State.” He did not agree with one interpretation previously offered by the government’s Solicitor General, that the term was an undefined term of art. He disdained to apply a precedent established by the Court in a previous case involving interpretation of law by administrative agencies, the Chevron case. That precedent said that where a phrase is ambiguous, a reasonable interpretation by the agency charged with administering the law would govern. In this case, though, Roberts claimed that since “the IRS…has no expertise in crafting health-insurance policy of this sort,” Congress could not possibly have intended to grant the agency this kind of discretion.

Roberts was prepared to concede that the most natural reading of “established by the State” excludes exchanges established by the federal government. But he said that the Supreme Court could not interpret the law this way because doing so would cause the law to fail to achieve its intended purpose. So, the Court must treat the wording as ambiguous and interpret it in such a way as to advance the goals intended by Congress and the administration. Hence his decision for the defendant and against the plaintiffs.

In other words, he rejected the ability of the IRS to interpret the meaning of the phrase “established by the State” because of that agency’s lack of health-care-policy expertise, but is sufficiently confident of his own expertise in that area to interpret its meaning himself; it is his assessment of the market consequences that drives his decision to uphold the tax credits.

Roberts’ opinion prompted one of the most scathing, incredulous dissents in the history of the Court, by Justice Antonin Scalia. “This case requires us to decide whether someone who buys insurance on an exchange established by the Secretary gets tax credits,” begins Scalia. “You would think the answer would be obvious – so obvious that there would hardly be a need for the Supreme Court to hear a case about it… Under all the usual rules of interpretation… the government should lose this case. But normal rules of interpretation seem always to yield to the overriding principle of the present Court – the Affordable Care Act must be saved.”

The reader can sense Scalia’s mounting indignation and disbelief. “The Court interprets [Section 1401] to award tax credits on both federal and state exchanges. It accepts that the most natural sense of the phrase ‘an exchange established by the State’ is an exchange established by a state. (Understatement, thy name is an opinion on the Affordable Care Act!) Yet the opinion continues, with no semblance of shame, that ‘it is also possible that the phrase refers to all exchanges.’ (Impossible possibility, thy name is an opinion on the Affordable Care Act!)”

“Perhaps sensing the dismal failure of its efforts to show that ‘established by the State’ means ‘established by the State and the federal government,’ the Court tries to palm off the pertinent statutory phrase as ‘inartful drafting.’ The Court, however, has no free-floating power to rescue Congress from its drafting errors.” In other words, Justice Roberts has rewritten the law to suit himself.

Driving the point home, Scalia writes: “…the Court forgets that ours is a government of laws and not of men. That means we are governed by the terms of our laws and not by the unenacted will of our lawmakers. If Congress enacted into law something different from what it intended, then it should amend the law to conform to its intent. In the meantime, the Court has no roving license …to disregard clear language on the view that … ‘Congress must have intended’ something broader.”

“Rather than rewriting the law under the pretense of interpreting it, the Court should have left it to Congress to decide what to do… [the] Court’s two cases on the law will be remembered through the years. And the cases will publish the discouraging truth that the Supreme Court favors some laws over others and is prepared to do whatever it takes to uphold and assist its favorites… We should start calling this law SCOTUSCare.”

Jonathan Adler of the much-respected and much-quoted law blog the Volokh Conspiracy put it this way: “The umpire has decided that it’s okay to pinch-hit to ensure that the right team wins.”

And indeed, what most stands out about Roberts’ opinion is its contravention of ordinary constitutional thought. It is not the product of a mind that began at square one and worked its way methodically to a logical conclusion. The reader senses a reversal of procedure; the Chief Justice started out with a desired conclusion and worked backwards to figure out how to justify reaching it. Justice Scalia says as much in his dissent. But Scalia does not tell us why Roberts is behaving in this manner.

If we are honest with ourselves, we must admit that we do not know why Roberts is saying what he is saying. Beyond question, it is arbitrary and indefensible. Certainly it is inconsistent with his past decisions. There are various reasons why a man might do this.

One obvious motivation might be that Roberts is being blackmailed by political supporters of the PPACA, within or outside of the Obama administration. Since blackmail is not only a crime but also a distasteful allegation to make, nobody will advance it without concrete supporting evidence – not only evidence against the blackmailer but also an indication of his or her ammunition. The opposite side of the blackmail coin is bribery. Once again, nobody will allege this publicly without concrete evidence, such as letters, tapes, e-mails, bank account or bank-transfer information. These possibilities deserve mention because they lie at the head of a short list of motives for betrayal of deeply held principles.

Since nobody has come forward with evidence of malfeasance – or is likely to – suppose we disregard that category of possibility. What else could explain Roberts’ actions? (Note the plural; this is the second time he has sustained PPACA at the cost of his own integrity.)

Lord Acton Revisited

To explain John Roberts’ actions, we must develop a model of political economy. That requires a short side trip into the realm of political philosophy.

Lord Acton’s famous maxim is: “Power tends to corrupt, and absolute power corrupts absolutely.” We are used to thinking of it in the context of a dictatorship or of an individual or institution temporarily or unjustly wielding power. But it is highly applicable within the context of today’s welfare-state democracies.

All of the Western industrialized nations have evolved into what F. A. Hayek called “absolute democracies.” They are democratic because popular vote determines the composition of representative governments. But they are absolute in scope and degree because the administrative agencies staffing those governments are answerable to no voter. And increasingly the executive, legislative and judicial branches of the governments wield powers that are virtually unlimited. In practical effect, voters vote on which party will wield nominal executive control over the agencies and dominate the legislature. Instead of a single dictator, voters elect a government body with revolving and rotating dictatorial powers.

As the power of government has grown, the power at stake in elections has grown commensurately. This explains the burgeoning amounts of money spent on elections. It also explains the growing rancor between opposing parties, since ordinary citizens perceive the loss of electoral dominance to be subjugation akin to living under a dictatorship. But instead of viewing this phenomenon from the perspective of John Q. Public, view it from within the brain of a policymaker or decisionmaker.

For example, suppose you are a completely fictional Chairman of a completely hypothetical Federal Reserve Board. We will call you “Bernanke.” During a long period of absurdly low interest rates, a huge speculative boom has produced unprecedented levels of real-estate investment by banks and near-banks. After stoutly insisting for years on the benign nature of this activity, you suddenly perceive the likelihood that this speculative boom will go bust and some indeterminate number of these financial institutions will become insolvent. What do you do? 

Actually, the question is really more “What do you say?” The actions of the Federal Reserve in regulating banks, including those threatened with or undergoing insolvency, are theoretically set down on paper, not conjured up extemporaneously by the Fed Chairman every time a crisis looms. These days, though, the duties of a Fed Chairman involve verbal reassurance and massage as much as policy implementation. Placing those duties in their proper light requires that our side trip be interrupted with a historical flashback.

Let us cast our minds back to 1929 and the onset of the Great Depression in the United States. At that time, virtually nobody foresaw the coming of the Depression – nobody in authority, that is. For many decades afterwards, the conventional narrative was that President Herbert Hoover adopted a laissez faire economic policy, stubbornly waiting for the economy to recover rather than quickly ramping up government spending in response to the collapse of the private sector. Hoover’s name became synonymous with government passivity in the face of adversity. Makeshift shanties and villages of the homeless and dispossessed became known as “Hoovervilles.”

It took many years to dispel this myth. The first truthteller was economist Murray Rothbard in his 1963 book America’s Great Depression, who pointed out that Hoover had spent his entire term in a frenzy of activism. Far from remaining a pillar of fiscal rectitude, Hoover had presided over federal deficit spending so large that his successor, Democrat Franklin Delano Roosevelt, campaigned on a platform of balancing the federal-government budget. Hoover sternly warned corporate executives not to lower wages and adopted an official stance in favor of inflation.

Professional economists ignored Rothbard’s book in droves, as did reviewers throughout the mass media. Apparently the fact that Hoover’s policies failed to achieve their intended effects persuaded everybody that he couldn’t have actually followed the policies he did – since his actual policies were the very policies recommended by mainstream economists to counteract the effects of recession and Depression and were largely indistinguishable in kind, if not in degree, from those followed later by Roosevelt.

The anathematization of Herbert Hoover drove Hoover himself to distraction. The former President lived another thirty years, to age ninety, stoutly maintaining his innocence of the crime of insensitivity to the misery of the poor and unemployed. Prior to his presidency, Hoover had built a reputation as one of the great humanitarians of the 20th century by deploying his engineering and organizational skills in the cause of disaster relief across the globe. The trashing of his reputation as President is one of history’s towering ironies. As it happened, his economic policies were disastrous, but not because he didn’t care about the people. His failure was ignorance of economics – the same sin committed by his critics.

Worse than the effects of his policies, though, was the effect his demonization has had on subsequent policymakers. We do not remember the name of the captain of the Californian, the ship that lay anchored within sight of the Titanic but failed to answer distress calls and go to the rescue. But the name of Hoover is still synonymous with inaction and defeat. In politics, the unforgivable sin became not to act in the face of any crisis, regardless of the consequences.

Today, unlike in Hoover’s day, the Chairman of the Federal Reserve Board is the quarterback of economic policy. This is so despite the Fed’s ambiguous status as a quasi-government body, owned by its member banks with a leader appointed by the President. Returning to our hypothetical, we ponder the dilemma faced by the Chairman, “Bernanke.”

Bernanke only directly controls monetary policy and bank regulation. But he receives information about every aspect of the U.S. economy in order to formulate Fed policy. The Fed also issues forecasts and recommendations for fiscal and regulatory policies. Even though the Federal Reserve is nominally independent of politics and of the Treasury Department of the federal government, the Fed’s policies affect and are affected by government policies.

It might be tempting to assume that Fed Chairmen know what is going to happen in the economic future. But there is no reason to believe that is true. All we need do is examine their past statements to disabuse ourselves of that notion. Perhaps the popping of the speculative bubble that Bernanke now anticipates will produce an economic recession. Perhaps it will even topple the U.S. banking system like a row of dominoes and produce another Great Depression, a la 1929. But we cannot assume that either. The fact that we had one (1) Great Depression is no guarantee that we will have another one. After all, we have had 36 other recessions that did not turn into Great Depressions. There is nothing like a general consensus on what caused the Great Depression. (The reader is invited to peruse the many volumes written by historians, economic and non-, on the subject.) About the only point of agreement among commentators is that a large number of things went wrong more or less simultaneously and all of them contributed in varying degrees to the magnitude of the Depression.

Of course, a good case might be made that it doesn’t matter whether Fed Chairman can foresee a coming Great Depression or not. Until recently, one of the few things that united contemporary commentators was their conviction that another Great Depression was impossible. The safeguards put in place in response to the first one had foreclosed that possibility. First, “automatic stabilizers” would cause government spending to rise in response to any downturn in private-sector spending, thereby heading off any cumulative downward movement in investment and consumption in response to failures in the banking sector. Second, the Federal Reserve could and would act quickly in response to bank failures to prevent the resulting reverse-multiplier effect on the money supply, thereby heading off that threat at the pass. Third, bank regulations were modified and tightened to prevent failures from occurring or restrict them to isolated cases.

Yet despite everything written above, we can predict confidently that our fictional “Bernanke” would respond to a hypothetical crisis exactly as the real Ben Bernanke did respond to the crisis he faced and later described in the book he wrote about it. The actual and predicted responses are the same: Scare the daylights out of the public by predicting an imminent Depression of cataclysmic proportions and calling for massive government spending and regulation to counteract it. Of course, the real-life Bernanke claimed that he and Treasury Secretary Henry Paulson correctly foresaw the economic future and were heroically calling for preventive measures before it was too late. But the logic we have carefully developed suggests otherwise.

Nobody – not Federal Reserve Chairmen or Treasury Secretaries or California psychics – can foresee Great Depressions. Predicting a recession is only possible if the cyclical process underlying it is correctly understood, and there is no generally accepted theory of the business cycle. No, Bernanke and Paulson were not protecting America with their warning; they were protecting themselves. They didn’t know that a Great Depression was in the works – but they did know that they would be blamed for anything bad that did happen to the economy. Their only way of insuring against that outcome – of buying insurance against the loss of their jobs, their professional reputations and the possibility of historical “Hooverization” – was to scream for the biggest possible government action as soon as possible.

Ben Bernanke had been blasé about the effects of ultra-low interest rates; he had pooh-poohed the possibility that the housing boom was a bubble that would burst like a sonic boom with reverberations that would flatten the economy. Suddenly he was confronted with a possibility that threatened to make him look like a fool. Was he icy cool, detached, above all personal considerations? Thinking only about banking regulations, national-income multipliers and the money supply? Or was he thinking the same thought that would occur to any normal human being in his place: “Oh, my God, my name will go down in history as the Herbert Hoover of Fed chairmen”?

Since the reasoning he claims as his inspiration is so obviously bogus, it is logical to classify his motives as personal rather than professional. He was protecting himself, not saving the country. And that brings us to the case of Chief Justice John Roberts.

Chief Justice John Roberts: Selfless, Self-Interested or Self-Preservationist?

For centuries, economists have identified self-interest as the driving force behind human behavior. This has exasperated and even angered outside observers, who have mistaken self-interest for greed or money-obsession. It is neither. Rather, it merely recognizes that the structure of the human mind gives each of us a comparative advantage in the promotion of our own welfare above that of others. Because I know more about me than you do, I can make myself happier than you can; because you know more about you than I do, you can make yourself happier than I can. And by cooperating to share our knowledge with each other, we can make each other happier through trade than we could be if we acted in isolation – but that cooperation must preserve the principle of self-interest in order to operate efficiently.

Strangely, economists long assumed that the same people who function well under the guidance of self-interest throw that principle to the winds when they take up the mantle of government. Government officials and representatives, according to traditional economics textbooks, become selfless instead of self-interested when they take office. Selflessness demands that they put the public welfare ahead of any personal considerations. And just what is the “public welfare,” exactly? Textbooks avoided grappling with this murky question by hiding behind notions like a “social welfare function” or a “community indifference curve.” These are examples of what the late F. A. Hayek called “the pretense of knowledge.”

Beginning in the 1950s, the “public choice” school of economics and political science was founded by James Buchanan and Gordon Tullock. This school of thought treated people in government just like people outside of government. It assumed that politicians, government bureaucrats and agency employees were trying to maximize their utility and operating under the principle of self-interest. Because the incentives they faced were radically different from those faced by people in the private sector, outcomes within government differed radically from those outside of government – usually for the worse.

If we apply this reasoning to members of the Supreme Court, we are confronted by a special kind of self-interest exercised by people in a unique position of power and authority. Members of the Court have climbed their career ladder to the top; in law, there are no higher rungs. This has special economic significance.

When economists speak of “competition” among input-suppliers, we normally mean people competing with others doing the same job for promotion, raises and advancement. None of these are possible in this context. What about more elevated kinds of recognition? Well, there is certainly scope for that, but only for the best of the best. On the current Court, positive recognition goes to those who write notable opinions. Only Justice Scalia has the special talent necessary to stand out as a legal scholar for the ages. In this sense, Justice Scalia is “competing” in a self-interested way when he writes his opinions, but not with his fellow justices. He is competing with the great judges of history – John Marshall, Oliver Wendell Holmes, Louis Brandeis, and Learned Hand – against whom his work is measured. Otherwise, a justice can stand out from the herd by providing the deciding or “swing” vote in close decisions. In other words, he can become politically popular or unpopular with groups that agree or disagree with his vote. Usually, that results in transitory notoriety.

But in historic cases, there is the possibility that it might lead to “Hooverization.”

The bigger government gets, the more power it wields. More government power leads to more disagreement about its role, which leads to more demand for arbitration by the Supreme Court. This puts the Court in the position of deciding the legality of enactments that claim to do great things for people while putting their freedoms and livelihoods in jeopardy. Any judge who casts a deciding vote against such a measure will go down in history as “the man who shot down” the Great Bailout/the Great Health Care/the Great Stimulus/the Great Reproductive Choice, ad infinitum.

Almost all Supreme Court justices have little to gain but a lot to lose from opposing a measure that promotes government power. They have little to gain because they cannot advance further or make more money and they do not compete with J. Marshall, Holmes, Brandeis or Hand. They have a lot to lose because they fear being anathematized by history, snubbed by colleagues, picketed or assassinated in the present day, and seeing their children brutalized by classmates or the news media. True, they might get satisfaction from adhering to the Constitution and their personal conception of justice – if they are sheltered under the umbrella of another justice’s opinion or they can fly under the radar of media scrutiny in a relatively low-profile case.

Let us attach a name to the status occupied by most Supreme Court justices and to the spirit that animates them. It is neither self-interest nor selflessness in their purest forms; we shall call it self-preservation. They want to preserve the exalted status they enjoy and they are not willing to risk it; they are willing to obey the Constitution, observe the law and speak the truth but only if and when they can preserve their position by doing so. When they are threatened, their principles and convictions suddenly go out the window and they will say and do whatever it takes to preserve what they perceive as their “self.” That “self” is the collection of real income, perks, immunities and prestige that go with the status of Supreme Court Justice.

Chief Justice John Roberts is an example of the model of self-preservation. In both of the ObamaCare decisions, his opinions for the majority completely abandoned his previous conservative positions. They plumbed new depths of logical absurdity – legal absurdity in the first decision and semantic absurdity in the second one. Yet one day after the release of King v. Burwell, Justice Roberts dissented in the Obergefell case by chiding the majority for “converting personal preferences into constitutional law” and disregarding the clear meaning of the language in the laws being considered. In other words, he condemned precisely those sins he had himself committed the previous day in his majority opinion in King v. Burwell.

For decades, conservatives have watched in amazement, scratching their heads and racking their brains as ostensibly conservative justices appointed by Republican presidents unexpectedly betrayed their principles when the chips were down in high-profile cases. The economic model developed here lays out a systematic explanation for those previously inexplicable defections. David Souter, Anthony Kennedy, John Paul Stevens and Sandra Day O’Connor were the precursors to John Roberts. These were not random cases. They were the systematic workings of the self-preservationist principle in action.

DRI-135 for week of 1-4-15: Flexible Wages and Prices: Economic Shock Absorbers

An Access Advertising EconBrief:

Flexible Wages and Prices: Economic Shock Absorbers

At the same time that free markets are becoming an endangered species in our daily lives, they enjoy a lively literary existence. The latest stimulating exercise in free-market thought is The Forgotten Depression: 1921 – The Crash That Cured Itself. The author is James Grant, well-known in financial circles as editor/publisher of “Grant’s Interest Rate Observer.” For over thirty years, Grant has cast a skeptical eye on the monetary manipulations of governments and central banks. Now he casts his gimlet gaze backward on economic history. The result is electrifying.

The Recession/Depression of 1920-1921

The U.S. recession of 1920-1921 is familiar to students of business cycles and few others. It was a legacy of World War I. Back then, governments tended to finance wars through money creation. Invariably this led to inflation. In the U.S., the last days of the war and its immediate aftermath were boom times. As usual – when the boom was the artifact of money creation – the boom went bust.

Grant recounts the bust in harrowing detail. In 1921, industrial production fell by 31.6%, a staggering datum when we recall that the U.S. was becoming the world’s leading manufacturer. (The President’s Conference on Unemployment reported in 1929 that 1921 was the only year after 1899 in which industrial production had declined.) Gross national product (today we would cite gross domestic product; neither statistic was actually calculated at that time) fell about 24% between 1920 and 1921 in nominal dollars, or 9% when account is taken of price changes. (Grant compares this to the figures for the “Great Recession” of 2007-2009, which were 2.4% and 4.3%, respectively.) Corporate profits nosedived commensurately. Stocks plummeted; the Dow Jones Industrial Average fell by 46.6% between the cyclical peak of November, 1919 and the trough of August, 1921. According to Grant, “the U.S. suffered the steepest plunge in wholesale prices in its history (not even eclipsed by the Great Depression),” over 36% within 12 months. Unemployment rose dramatically to a level of some 4,270,000 in 1921 – and included even the President of General Motors, Billy Durant. (As the price of GM’s shares fell, he augmented his already-sizable shareholdings by buying on margin – ending up flat broke and out of a job.) Although the Department of Labor did not calculate an “unemployment rate” at that time, Grant estimates the nonfarm labor force at 27,989,000, which would make the simplest measure of the unemployment rate 15.3%. (A count that crude would undoubtedly have included labor-force dropouts and part-time workers who preferred full-time employment.)
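As a quick check on the arithmetic implied by Grant’s figures, the simple measure is just the ratio of the unemployed to the nonfarm labor force:

$$\text{unemployment rate} \approx \frac{4{,}270{,}000}{27{,}989{,}000} \approx 0.1526 \approx 15.3\%$$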

A telling indicator of the dark mood enveloping the nation was passage of the Quota Act, the first step on the road to systematic federal limitation of foreign immigration into the U.S. The quota was fixed at 3% of foreign nationals present in each of the 48 states as of 1910. That year evidently reflected nostalgia for pre-war conditions, since the then-popular agricultural agitation for farm-price “parity” likewise sought to peg prices to their levels of that same year.

In the Great Recession and accompanying financial panic of 2008 and subsequently, we had global warming and tsunamis in Japan and Indonesia to distract us. In 1920-1921, Prohibition had already shut down the legal liquor business, shuttering bars and nightclubs. A worldwide flu pandemic had killed hundreds of thousands. The Black Sox had thrown the 1919 World Series at the behest of gamblers.

The foregoing seems to make a strong prima facie case that the recession of 1920 turned into the depression of 1921. That was the judgment of the general public and contemporary commentators. Herbert Hoover, Secretary of Commerce under Republican President Warren G. Harding, who followed wartime President Woodrow Wilson in 1920, compiled many of the statistics Grant cites while chairman of the President’s Conference on Unemployment. He concurred with that judgment. So did the founder of the study of business cycles, the famous institutional economist Wesley C. Mitchell, who influenced colleagues as various and eminent as Thorstein Veblen, Milton Friedman, F. A. Hayek and John Kenneth Galbraith. Mitchell referred to “…the boom of 1919, the crisis of 1920 and the depression of 1921 [that] followed the patterns of earlier cycles.”

By today’s lights, the stage was set for a gigantic wave of federal-government intervention, a gargantuan stimulus program. Failing that, economists would have us believe, the economy would sink like a stone into a pit of economic depression from which it would likely never emerge.

What actually happened in 1921, however, was entirely different.

The Depression That Didn’t Materialize

We may well wonder what might have happened if the Democrats had retained control of the White House and Congress. Woodrow Wilson and his advisors (notably his personal secretary, Joseph Tumulty) had greatly advanced the project of big government begun by Progressive Republicans Theodore Roosevelt and William Howard Taft. During World War I, the Wilson administration seized control of the railroads, the telephone companies and the telegraph companies. It imposed wage and price controls. The spirit of the Wilson administration’s efforts is best characterized by the statement of the Chief Price Controller of the War Industries Board, Robert Brookings: “I would rather pay a dollar a pound for [gun]powder for the United States in a state of war if there was no profit in it than pay the DuPont Company 50 cents a pound if they had 10 cents profit in it.” Of course, Mr. Brookings was not actually himself buying the gunpowder; the government was only representing the taxpayers (of whom Mr. Brookings was presumably one). And their attitude toward taxpayers was displayed by the administration’s transformation of an income tax initiated at insignificant levels in 1913 into one with a top marginal rate of 77% (!!) on incomes exceeding $1 million.

But Wilson’s obsession with the League of Nations and his Fourteen Points for international governance had not only ruined his health, it had ruined his party’s standing with the electorate. In 1920, Republican Warren G. Harding was elected President. (The Republicans had already gained substantial Congressional majorities in the off-year elections of 1918.) Except for Hoover, the Harding circle of advisors was composed largely of policy skeptics – people who felt there was nothing to be done in the face of an economic downturn but wait it out. After all, the U.S. had endured exactly this same phenomenon of economic boom, financial panic and economic bust before – in 1812, 1818, 1825, 1837, 1847, 1857, 1873, 1884, 1890, 1893, 1903, 1907, 1910 and 1913. The U.S. economy had not remained mired in depression; it had emerged from all these recessions – or, in the case of 1873, a depression. If the 19th-century system of free markets were to be faulted, it would not be for failure to lift itself out of recession or depression, but for repeatedly re-entering the cycle of boom and bust.

The Federal Reserve did not flood the economy with liquidity, peg interest rates at artificially low levels or institute a “zero interest-rate policy.” Indeed, the rules of the gold-standard “game” called for the Federal Reserve to raise interest rates to stem the inflation that still raged in the aftermath of World War I. Had it not done so, a gold outflow might theoretically have drained the U.S. dry. The Fed did just that, and interest rates hovered around 8% for the duration. Deliberate deficit spending as an economic corrective would have been viewed as madness. As Grant put it, “laissez faire had its last hurrah in 1921.”

What was the result?

In the various individual industries, prices, wages and output fell like a stone. Auto production fell by 23%. General Motors, as previously noted, was particularly hard hit. It went from selling 52,000 vehicles per month to 13,000, and finally to 6,150, in the space of seven months. Some $85 million in inventory was eventually written off in losses.

Hourly manufacturing wages fell by 22%. Average disposable income in agriculture, which comprised just under 20% of the economy, fell by over 55%. Bankruptcies overall tripled to nearly 20,000 over the two years ending in 1921. In Kansas City, MO, a haberdashery shop run by Harry Truman and Eddie Jacobson held out through 1920 before finally folding in 1921. The resulting debts plagued the partners for years. Truman evaded personal bankruptcy by taking a job as judge of the Jackson County Court, where his salary was secure against liens. But his bank accounts were periodically raided by bill collectors for years until 1935, when he was able to buy up the remaining debt at a devalued price.

In late 1920, Ford Motor Co. cut the price of its Model T by 25%. GM at first resisted price cuts but eventually followed suit. Farmers, who as individuals had no control over the price of their products, had little choice but to cut costs and increase productivity – increasing output was an individual’s only way to increase income. When all or most farmers succeeded, this produced lower prices. How much lower? Grant: “In the second half of [1920], the average price of 10 leading crops fell by 57 percent.” But how much more food can humans eat; how many more clothes can they wear? Since the price- and income-elasticities of demand for agricultural goods were less than one, this meant that agricultural revenue and incomes fell.
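The underlying logic can be stated compactly. With revenue $R = PQ$ and the price-elasticity of demand $\varepsilon$ less than one in absolute value, quantity demanded rises less than proportionately when price falls, so revenue moves in the same direction as price:

$$\frac{dR}{dP} = Q + P\frac{dQ}{dP} = Q\,(1 - |\varepsilon|) > 0 \quad \text{when } |\varepsilon| < 1.$$

Hence the 57% crash in crop prices translated directly into falling farm revenues and incomes.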

As noted by Wesley Mitchell, the U.S. slump was not unique but rather part of a global depression that began as a series of commodity-price crashes in Japan, the U.K., France, Italy, Germany, India, Canada, Sweden, the Netherlands and Australia. It encompassed commodities including pig iron, beef, hemlock, Portland cement, bricks, coal, crude oil and cotton.

Banks that had speculative commodity positions were caught short. Among these was the largest bank in the U.S., National City Bank, which had loaned extensively to finance the sugar industry in Cuba. Sugar prices were brought down in the commodity crash and brought the bank down with them. That is, the bank would have failed had it not received sweetheart loans from the Federal Reserve.

Today, the crash of prices would be called “deflation.” So it was called then, too – but with much more precision. Today, deflation can mean anything from the kind of nosediving general price level seen in 1920-1921 to relatively stable prices to mild inflation – in short, any general level of prices that does not rise fast enough to suit a commentator.

But there was apparently general acknowledgment that deflation was occurring in the depression of 1921. Yet few people apart from economists found that ominous. And for good reason. Because after some 18 months of panic, recession and depression – the U.S. economy recovered. Just as it had done 14 times previously.


It didn’t merely recover. It roared back to life. President Harding died suddenly in 1923, but under President Coolidge the U.S. economy experienced the “Roaring 20s.” This was an economic boom fueled by low tax rates and high productivity, the likes of which would not be seen again until the 1980s. It was characterized by innovation and investment. Unfortunately, in the latter stages, the Federal Reserve forgot the lessons of 1921 and increased the money supply to “keep the price level stable” and prevent deflation in the face of the wave of innovation and productivity increases. This helped to usher in the Great Depression, along with numerous policy errors by the Hoover and Roosevelt administrations.

Economists like Keynes, Irving Fisher and Gustav Cassel were dumbfounded. They had expected deflation to flatten the U.S. economy like a pancake, increasing the real value of debts owed by debtor classes and discouraging consumers from spending in the expectation that prices would fall in the future. Not.

There was no economic stimulus. No TARP, no ZIRP, no QE. No wartime controls. No meddlesome regulation a la Theodore Roosevelt, Taft and Wilson. The Harding administration and the Fed left the economy alone to readjust and – mirabile dictu – it readjusted. In spite of the massive deflation or, much more likely, because of it.

The (Forgotten) Classical Theory of Flexible Wages and Prices

James Grant wants us to believe that this outcome was no accident. The book jacket for The Forgotten Depression bills it as “a free-market rejoinder to Bush’s and Obama’s Keynesian stimulus applied to the 2007-9 recession,” which “proposes ‘less is more’ with respect to federal intervention.”

His argument is almost entirely empirical and very heavily oriented to the 1920-1921 depression. That is deliberate; he cites the 14 previous cyclical contractions but focuses on this one for obvious reasons. It was the last time that free markets were given the opportunity to cure a depression; both Herbert Hoover and Franklin Roosevelt supervised heavy, continual interference with markets from 1929 through 1941. And we have much better data on the 1920-21 episode than on, say, the 1873 depression.

Readers may wonder, though, whether there is underlying logical support for the result achieved by the deflation of 1921. Can the chorus of economists advocating stimulative policy today really be wrong?

Prior to 1936, the policy chorus was even louder. Amazing as it now seems, it advocated the stance taken by Harding et al. Classical economists propounded the theory of flexible wages and prices as an antidote to recession and depression. And, without stating it in rigorous fashion, that is the theory that Grant is following in his book.

Using the language of modern macroeconomics, the problem posed by cyclical downturns is unemployment due to a sudden decline in aggregate (effective) demand for goods and services. The decline in aggregate demand causes declines in demand for all or most goods; the decline in demand for goods causes declines in demand for all or most types of labor. As a first approximation, this produces surpluses of goods and labor. The surplus of labor is defined as unemployment.

The classical economists pointed out that, while the shock of a decline in aggregate demand could cause temporary dislocations such as unsold goods and unemployment, this was not a permanent condition. Flexible wages and prices could, like the shock absorbers on an automobile, absorb the shock of the decline in aggregate demand and return the economy to stability.

Any surplus creates an incentive for sellers to lower price and buyers to increase purchases. As long as the surplus persists, the downward pressure on price will remain. And as the price (or wage) falls toward the new market-clearing point, the amount produced and sold (or the amount of labor offered and purchased) will increase once more.
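A minimal numerical sketch makes this mechanism concrete. The linear demand and supply curves below are hypothetical, chosen purely for illustration; the point is only that a surplus created by a demand shock puts downward pressure on price until the market clears again:

```python
# A minimal sketch of flexible prices as "shock absorbers," assuming
# hypothetical linear demand and supply curves (not data from Grant).

def demand(price, shock=0.0):
    """Quantity demanded at a given price; `shock` models a fall in aggregate demand."""
    return max(0.0, 100.0 - 2.0 * price + shock)

def supply(price):
    """Quantity supplied at a given price."""
    return max(0.0, 20.0 + 2.0 * price)

price = 20.0   # old market-clearing price: demand(20) = supply(20) = 60
shock = -40.0  # sudden decline in demand leaves a surplus at the old price

for _ in range(200):
    surplus = supply(price) - demand(price, shock)
    if abs(surplus) < 1e-6:
        break                  # market has re-cleared
    price -= 0.05 * surplus    # surplus puts downward pressure on price

print(f"new market-clearing price ~ {price:.2f}, quantity ~ {supply(price):.2f}")
# price falls from 20 to 10; the quantity traded falls from 60 to 40,
# but no persistent surplus (in labor markets, unemployment) remains.
```

With a rigid price held at the old level of 20, the same shock would leave a permanent surplus of 40 units – the analogue of persistent unemployment in labor markets.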

Flexibility of wages and prices is really a two-part process. Part one works to clear the surpluses created by the initial decline in aggregate demand. In labor markets, this serves to preserve the incomes of workers who remain willing to work at the now-lower market wage. If they were unemployed, they would have no wage, but working at a lower wage gives them a lower nominal income than before. That is only part of this initial process, though. Prices in product markets are decreasing alongside the declining wages. In principle, fully flexible prices and wages would mean that even though the nominal incomes of workers would decline, their real incomes would be restored by the decline of all prices in equal proportion. If your wage falls by (say) 20%, declines in all prices by 20% should leave you able to purchase the same quantities of goods and services as before.
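In symbols, with nominal wage $w$ and price level $P$, the worked example in the text is simply:

$$\frac{w'}{P'} = \frac{0.8\,w}{0.8\,P} = \frac{w}{P},$$

so a 20% cut in the nominal wage matched by a 20% fall in all prices leaves the real wage, and hence purchasing power, unchanged.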

The emphasis on real magnitudes rather than nominal magnitudes gives rise to the name given to the second part of this process. It is called the real-balance effect. It was named by the classical economist A. C. Pigou and refined by the later macroeconomist Don Patinkin.

When John Maynard Keynes wrote his General Theory of Employment, Interest and Money in 1936, he attacked classical economists by attacking the concepts of flexible wages and prices. First, he attacked their feasibility. Then, he attacked their desirability.

Flexible wages were not observed in reality because workers would not consent to downward revisions in wages, Keynes maintained. Did Keynes really believe that workers preferred to remain unemployed and earn zero wages rather than work at a lower market wage? Well, he said that workers oriented their thinking toward the nominal wage rather than the real wage and thus did not perceive that they had regained their former position with lower prices and a lower wage. (This became known as “money illusion.”) His followers spent decades trying to explain what he really meant or revising his words or simply ignoring his actual words. (It should be noted, however, that Keynes was English, and trade unions exerted vastly greater influence on prevailing wage levels in England than they did in the U.S. for at least the first three-quarters of the 20th century. This may well have biased Keynes’ thinking.)

Keynes also decried the assumption of flexible prices for various reasons, some of which continue to sway economists today. The upshot is that macroeconomics has lost touch with the principles of price flexibility. Even though Keynes’ criticisms of the classical economists and the price system were discredited in strict theory, they were accepted de facto by macroeconomists because it was felt that flexible wages and prices would take too long to work, while macroeconomic policy could be formulated and deployed relatively quickly. Why make people undergo the misery of unemployment and insolvency when we can relieve their anxiety quickly and compassionately by passing laws drafted by macroeconomists on the President’s Council of Economic Advisors?

Let’s Compare

Thanks to James Grant, we now have an empirical basis for comparison between policy regimes. In 1920-1921, the old-fashioned classical medicine of deflation, flexible wages and prices and the real-balance effect took 18 months to turn a panic, recession and depression into a rip-roaring recovery that lasted 8 years.

Fast forward to December, 2007. The recession has begun. Unfortunately, it is not detected until September, 2008, when the financial panic begins. The stimulus package is not passed until February, 2009 – barely in time for the official end of the recession in June, 2009. Whoops – unemployment is still around 10% and remains stubbornly high until 2013. Moreover, it only declines because Americans have left the labor force in numbers not seen for over thirty years. The recovery, such as it is, is so anemic as to hardly merit the name – and it is now over 7 years since the onset of recession in December, 2007.


It is no good complaining that the stimulus package was not large enough, because we are comparing it with a case in which the authorities did nothing – or rather, did nothing stimulative, since their interest-rate increase should properly be termed contractionary. That is exactly what macroeconomists call it when referring to Federal Reserve policy in the 1930s, during the Great Depression, when they blame Fed policy and high interest rates for prolonging the Depression. Shouldn’t they instead be blaming the continual series of government interventions by the Fed and the federal government under Herbert Hoover and Franklin Roosevelt? And we didn’t even count the stimulus package introduced by the Bush administration, which came and went without making a ripple in terms of economic effect.

Economists Are Lousy Accident Investigators 

For nearly a century, the economics profession has accused free markets of possessing faulty shock absorbers; namely, inflexible wages and prices. When it comes to economic history, economists are obviously lousy accident investigators. They have never developed a theory of business cycles but have instead assumed a decline in aggregate demand without asking why it occurred. In figurative terms, they have assumed the cause of the “accident” (the recession or the depression). Then they have made a further assumption that the failure of the “vehicle’s” (the economy’s) automatic guidance system to prevent (or mitigate) the accident was due to “faulty shock absorbers” (inflexible wages and prices).

Would an accident investigator fail to visit the scene of the accident? The economics profession has largely failed to investigate the flexibility of wages and prices even in the Great Depression, let alone the thirty-odd other economic contractions chronicled by the National Bureau of Economic Research. The work of researchers like Murray Rothbard, Richard Vedder and Lowell Gallaway, Benjamin Anderson and Harris Warren overturns the mainstream presumption of free-market failure.

The biggest empirical failure of all is one ignored by Grant; namely, the failure to demonstrate policy success. If macroeconomic policy worked as advertised, then we would not have recessions in the first place and could reliably end them once they began. In fact, we still have cyclical downturns, we cannot use policy to end them, and macroeconomists can point to no policy successes to bolster their case.

Now we have this case study by James Grant that provides meticulous proof that deflation – full-blooded, deep-throated, hell-for-leather deflation in no uncertain terms – put a prompt, efficacious end to what must be called an economic depression.

Combine this with the 40-year-long research project conducted on Keynesian theory, culminating in its final discrediting by the early 1980s. Throw in the existence of the Austrian Business Cycle Theory, which combines the monetary theory of Ludwig von Mises and interest-rate theory of Knut Wicksell with the dynamic synthesis developed by F. A. Hayek. This theory cannot be called complete because it lacks a fully worked out capital theory to complete the integration of monetary and value theory. (We might think of this as the economic version of the Unified Field Theory in the natural sciences.) But an incomplete valid theory beats a discredited theory every time.

In other words, free-market economics has an explanation for why the accident repeatedly happens and why its effects can be mitigated by the economy’s automatic guidance mechanism without the need for policy action by government. It also explains why policy actions are ineffective as both remedy and prevention in the field of accidents.

James Grant’s book will take its place in the pantheon of economic history as the outstanding case study to date of a self-curing depression.