DRI-172 for week of 7-5-15: How and Why Did ObamaCare Become SCOTUSCare?

An Access Advertising EconBrief:

How and Why Did ObamaCare Become SCOTUSCare?

On June 25, 2015, the Supreme Court of the United States delivered its most consequential opinion in recent years in King v. Burwell. King was David King, one of several plaintiffs opposing Sylvia Burwell, Secretary of Health and Human Services. The case might more colloquially be called “ObamaCare II,” since it dealt with the second major attempt to overturn the Obama administration’s signature legislative achievement.

The Obama administration has been bragging about its success in attracting signups for the program. Not surprisingly, it fails to mention two facts that make this apparent victory Pyrrhic. First, most of the signups are people who lost their previous health insurance due to the law’s provisions, not people who lacked insurance to begin with. Second, a large chunk of enrollees are being subsidized by the federal government in the form of tax credits toward the cost of the insurance.

The point at issue in King v. Burwell is the legality of this subsidy. The original legislation provides for health-care exchanges established by state governments, and proponents have been quick to cite these provisions to pooh-pooh the contention that the Patient Protection and Affordable Care Act (PPACA) ushered in a federally run, socialist system of health care. The specific language used by PPACA in Section 1401 is that the IRS can provide tax credits for insurance purchased on exchanges “established by the State.” That phrase appears 14 times in Section 1401, and each time it clearly refers to state governments, not the federal government. But in actual practice, states have found it excruciatingly difficult to establish these exchanges and many states have refused to do so. Thus, people in those states have turned to the federal-government website for health insurance and have nevertheless received a tax credit under the IRS’s interpretation of Section 1401. That interpretation has come to light in various lawsuits heard by lower courts, some of which have ruled for plaintiffs and against attempts by the IRS and the Obama administration to award the tax credits.

Without the tax credits, many people on both sides of the political spectrum agree, PPACA will crash and burn. Not enough healthy people will sign up for the insurance to subsidize those with pre-existing medical conditions for whom PPACA is the only source of external funding for medical treatment.
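The adverse-selection logic in this paragraph can be sketched numerically. The following is a hypothetical illustration only; the dollar figures, the behavioral rule (enrollees stay only while the premium is below their expected benefit), and the function name are assumptions for the sketch, not anything from the article:

```python
# Hypothetical "death spiral" sketch: each round, the insurer sets the premium
# to cover the previous round's average cost; enrollees whose expected medical
# cost is below the premium drop out, leaving a sicker, costlier pool.

def death_spiral(costs, rounds=5):
    """costs: expected annual medical cost per enrollee (illustrative dollars)."""
    pool = sorted(costs)
    history = []
    premium = sum(pool) / len(pool)          # break-even community-rated premium
    for _ in range(rounds):
        # Enrollees stay only if coverage is worth at least the premium to them.
        pool = [c for c in pool if c >= premium]
        if not pool:
            history.append((premium, 0))
            break
        history.append((premium, len(pool)))
        premium = sum(pool) / len(pool)      # re-price to the sicker pool
    return history

# A pool of mostly healthy people plus a few high-cost enrollees:
enrollees = [500, 800, 1000, 1500, 2000, 8000, 12000, 20000]
for premium, size in death_spiral(enrollees):
    print(f"premium ${premium:,.0f}, enrollees remaining: {size}")
```

Under these assumed numbers, the healthy majority exits after the first re-pricing and the premium roughly triples, which is the mechanism the “crash and burn” prediction rests on; the tax credits are meant to keep the healthy enrollees from dropping out in the first round.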

To a figurative roll of drums, the Supreme Court of the United States (SCOTUS) released its opinion on June 25, 2015. It upheld the legality of the IRS interpretation in a 6-3 decision, finding for the government and the Obama administration for the second time. And for the second time, the opinion for the majority was written by Chief Justice John Roberts.

Roberts’ Rules of Constitutional Disorder

Given that Justice Roberts had previously written the opinion upholding the constitutionality of the law, his vote here cannot be considered a complete shock. As before, the shock was in the reasoning he used to reach his conclusion. In the first case (National Federation of Independent Business v. Sebelius, 2012), Roberts interpreted a key provision of the law in a way that its supporters had categorically and angrily rejected, both during the legislative debate prior to enactment and subsequently. He referred to the “individual mandate” that uninsured citizens must purchase health insurance as a tax. This rescued it from the otherwise untenable status of a coercive consumer directive – something not allowed under the Constitution.

Now Justice Roberts addressed the meaning of the phrase “established by the State.” He did not agree with one interpretation previously advanced by the government’s Solicitor General, that the term was an undefined term of art. He disdained to apply a precedent established by the Court in a previous case involving interpretation of law by administrative agencies, the Chevron case. The precedent said that in cases where a phrase was ambiguous, a reasonable interpretation by the agency charged with administering the law would rule. In this case, though, Roberts claimed that since “the IRS…has no expertise in crafting health-insurance policy of this sort,” Congress could not possibly have intended to grant the agency this kind of discretion.

Roberts is prepared to concede that “established by the State” does not naturally mean “established by the federal government.” But he says that the Supreme Court cannot interpret the law this way because doing so would cause the law to fail to achieve its intended purpose. So the Court must treat the wording as ambiguous and interpret it in such a way as to advance the goals intended by Congress and the administration. Hence his decision for the defendant and against the plaintiffs.

In other words, he rejected the ability of the IRS to interpret the meaning of the phrase “established by the State” because of that agency’s lack of health-care-policy expertise, but is sufficiently confident of his own expertise in that area to interpret its meaning himself; it is his assessment of the market consequences that drives his decision to uphold the tax credits.

Roberts’ opinion prompted one of the most scathing, incredulous dissents in the history of the Court, by Justice Antonin Scalia. “This case requires us to decide whether someone who buys insurance on an exchange established by the Secretary gets tax credits,” begins Scalia. “You would think the answer would be obvious – so obvious that there would hardly be a need for the Supreme Court to hear a case about it… Under all the usual rules of interpretation… the government should lose this case. But normal rules of interpretation seem always to yield to the overriding principle of the present Court – the Affordable Care Act must be saved.”

The reader can sense Scalia’s mounting indignation and disbelief. “The Court interprets [Section 1401] to award tax credits on both federal and state exchanges. It accepts that the most natural sense of the phrase ‘an exchange established by the State’ is an exchange established by a state. (Understatement, thy name is an opinion on the Affordable Care Act!) Yet the opinion continues, with no semblance of shame, that ‘it is also possible that the phrase refers to all exchanges.’ (Impossible possibility, thy name is an opinion on the Affordable Care Act!)”

“Perhaps sensing the dismal failure of its efforts to show that ‘established by the State’ means ‘established by the State and the federal government,’ the Court tries to palm off the pertinent statutory phrase as ‘inartful drafting.’ The Court, however, has no free-floating power to rescue Congress from their drafting errors.” In other words, Justice Roberts has rewritten the law to suit himself.

To reinforce his conclusion, Scalia concludes with “…the Court forgets that ours is a government of laws and not of men. That means we are governed by the terms of our laws and not by the unenacted will of our lawmakers. If Congress enacted into law something different from what it intended, then it should amend the law to conform to its intent. In the meantime, Congress has no roving license …to disregard clear language on the view that … ‘Congress must have intended’ something broader.”

“Rather than rewriting the law under the pretense of interpreting it, the Court should have left it to Congress to decide what to do… [the] Court’s two cases on the law will be remembered through the years. And the cases will publish the discouraging truth that the Supreme Court favors some laws over others and is prepared to do whatever it takes to uphold and assist its favorites… We should start calling this law SCOTUSCare.”

Jonathan Adler of the much-respected and quoted law blog Volokh Conspiracy put it this way: “The umpire has decided that it’s okay to pinch-hit to ensure that the right team wins.”

And indeed, what most stands out about Roberts’ opinion is its contravention of ordinary constitutional thought. It is not the product of a mind that began at square one and worked its way methodically to a logical conclusion. The reader senses a reversal of procedure; the Chief Justice started out with a desired conclusion and worked backwards to figure out how to justify reaching it. Justice Scalia says as much in his dissent. But Scalia does not tell us why Roberts is behaving in this manner.

If we are honest with ourselves, we must admit that we do not know why Roberts is saying what he is saying. Beyond question, it is arbitrary and indefensible. Certainly it is inconsistent with his past decisions. There are various reasons why a man might do this.

One obvious motivation might be that Roberts is being blackmailed by political supporters of the PPACA, within or outside of the Obama administration. Since blackmail is not only a crime but also a distasteful allegation to make, nobody will advance it without concrete supporting evidence – not only evidence against the blackmailer but also an indication of his or her ammunition. The opposite side of the blackmail coin is bribery. Once again, nobody will allege this publicly without concrete evidence, such as letters, tapes, e-mails, bank account or bank-transfer information. These possibilities deserve mention because they lie at the head of a short list of motives for betrayal of deeply held principles.

Since nobody has come forward with evidence of malfeasance – or is likely to – suppose we disregard that category of possibility. What else could explain Roberts’ actions? (Note the plural; this is the second time he has sustained PPACA at the cost of his own integrity.)

Lord Acton Revisited

To explain John Roberts’ actions, we must develop a model of political economy. That requires a short side trip into the realm of political philosophy.

Lord Acton’s famous maxim is: “Power tends to corrupt, and absolute power corrupts absolutely.” We are used to thinking of it in the context of a dictatorship or of an individual or institution temporarily or unjustly wielding power. But it is highly applicable within the context of today’s welfare-state democracies.

All of the Western industrialized nations have evolved into what F. A. Hayek called “absolute democracies.” They are democratic because popular vote determines the composition of representative governments. But they are absolute in scope and degree because the administrative agencies staffing those governments are answerable to no voter. And increasingly the executive, legislative and judicial branches of the governments wield powers that are virtually unlimited. In practical effect, voters vote on which party will wield nominal executive control over the agencies and dominate the legislature. Instead of a single dictator, voters elect a government body with revolving and rotating dictatorial powers.

As the power of government has grown, the power at stake in elections has grown commensurately. This explains the burgeoning amounts of money spent on elections. It also explains the growing rancor between opposing parties, since ordinary citizens perceive the loss of electoral dominance to be subjugation akin to living under a dictatorship. But instead of viewing this phenomenon from the perspective of John Q. Public, view it from within the brain of a policymaker or decisionmaker.

For example, suppose you are a completely fictional Chairman of a completely hypothetical Federal Reserve Board. We will call you “Bernanke.” During a long period of absurdly low interest rates, a huge speculative boom has produced unprecedented levels of real-estate investment by banks and near-banks. After stoutly insisting for years on the benign nature of this activity, you suddenly perceive the likelihood that this speculative boom will go bust and some indeterminate number of these financial institutions will become insolvent. What do you do? 

Actually, the question is really more “What do you say?” The actions of the Federal Reserve in regulating banks, including those threatened with or undergoing insolvency, are theoretically set down on paper, not conjured up extemporaneously by the Fed Chairman every time a crisis looms. These days, though, the duties of a Fed Chairman involve verbal reassurance and massage as much as policy implementation. Placing those duties in their proper light requires that our side trip be interrupted with a historical flashback.

Let us cast our minds back to 1929 and the onset of the Great Depression in the United States. At that time, virtually nobody foresaw the coming of the Depression – nobody in authority, that is. For many decades afterwards, the conventional narrative was that President Herbert Hoover adopted a laissez faire economic policy, stubbornly waiting for the economy to recover rather than quickly ramping up government spending in response to the collapse of the private sector. Hoover’s name became synonymous with government passivity in the face of adversity. Makeshift shanties and villages of the homeless and dispossessed became known as “Hoovervilles.”

It took many years to dispel this myth. The first truthteller was economist Murray Rothbard in his 1962 book America’s Great Depression, who pointed out that Hoover had spent his entire term in a frenzy of activism. Far from remaining a pillar of fiscal rectitude, Hoover had presided over federal deficit spending so large that his successor, Democrat Franklin Delano Roosevelt, campaigned on a platform of balancing the federal-government budget. Hoover sternly warned corporate executives not to lower wages and adopted an official stance in favor of inflation.

Professional economists ignored Rothbard’s book in droves, as did reviewers throughout the mass media. Apparently the fact that Hoover’s policies failed to achieve their intended effects persuaded everybody that he couldn’t have actually followed the policies he did – since his actual policies were the very policies recommended by mainstream economists to counteract the effects of recession and Depression and were largely indistinguishable in kind, if not in degree, from those followed later by Roosevelt.

The anathematization of Herbert Hoover drove Hoover himself to distraction. The former President lived another three decades, to age ninety, stoutly maintaining his innocence of the crime of insensitivity to the misery of the poor and unemployed. Prior to his presidency, Hoover had built a reputation as one of the great humanitarians of the 20th century by deploying his engineering and organizational skills in the cause of disaster relief across the globe. The trashing of his reputation as President is one of history’s towering ironies. As it happened, his economic policies were disastrous, but not because he didn’t care about the people. His failure was ignorance of economics – the same sin committed by his critics.

Worse than the effects of his policies, though, was the effect his demonization has had on subsequent policymakers. We do not remember the name of the captain of the Californian, the ship that lay anchored within sight of the Titanic but failed to answer distress calls and go to the rescue. But the name of Hoover is still synonymous with inaction and defeat. In politics, the unforgivable sin became not to act in the face of any crisis, regardless of the consequences.

Today, unlike in Hoover’s day, the Chairman of the Federal Reserve Board is the quarterback of economic policy. This is so despite the Fed’s ambiguous status as a quasi-government body, owned by its member banks with a leader appointed by the President. Returning to our hypothetical, we ponder the dilemma faced by the Chairman, “Bernanke.”

Bernanke directly controls only monetary policy and bank regulation. But he receives information about every aspect of the U.S. economy in order to formulate Fed policy. The Fed also issues forecasts and recommendations for fiscal and regulatory policies. Even though the Federal Reserve is nominally independent of politics and of the Treasury Department of the federal government, the Fed’s policies affect and are affected by government policies.

It might be tempting to assume that Fed Chairmen know what is going to happen in the economic future. But there is no reason to believe that is true. All we need do is examine their past statements to disabuse ourselves of that notion. Perhaps the popping of the speculative bubble that Bernanke now anticipates will produce an economic recession. Perhaps it will even topple the U.S. banking system like a row of dominoes and produce another Great Depression, à la 1929. But we cannot assume that either. The fact that we had one (1) Great Depression is no guarantee that we will have another one. After all, we have had 36 other recessions that did not turn into Great Depressions. There is nothing like a general consensus on what caused the Depression of the 1930s. (The reader is invited to peruse the many volumes written by historians, economic and non-, on the subject.) About the only point of agreement among commentators is that a large number of things went wrong more or less simultaneously and all of them contributed in varying degrees to the magnitude of the Depression.

Of course, a good case might be made that it doesn’t matter whether Fed Chairman can foresee a coming Great Depression or not. Until recently, one of the few things that united contemporary commentators was their conviction that another Great Depression was impossible. The safeguards put in place in response to the first one had foreclosed that possibility. First, “automatic stabilizers” would cause government spending to rise in response to any downturn in private-sector spending, thereby heading off any cumulative downward movement in investment and consumption in response to failures in the banking sector. Second, the Federal Reserve could and would act quickly in response to bank failures to prevent the resulting reverse-multiplier effect on the money supply, thereby heading off that threat at the pass. Third, bank regulations were modified and tightened to prevent failures from occurring or restrict them to isolated cases.
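The “reverse-multiplier effect on the money supply” mentioned above can be sketched with a bit of stylized fractional-reserve arithmetic. The reserve ratio and dollar figures below are illustrative assumptions chosen for the sketch, not figures from the article:

```python
# Stylized fractional-reserve arithmetic (hypothetical numbers): under a
# required reserve ratio r, each dollar of bank reserves can support up to
# 1/r dollars of deposits. When a bank failure destroys reserves, deposits
# (and thus the money supply) can contract by a multiple of the reserves lost
# -- the "reverse multiplier" the safeguards were designed to prevent.

def max_deposits(reserves, reserve_ratio):
    """Maximum deposits the banking system can support on a given reserve base."""
    return reserves / reserve_ratio

r = 0.10                               # assumed 10% reserve requirement
before = max_deposits(100.0, r)        # $100 of reserves supports up to $1,000 of deposits
after = max_deposits(80.0, r)          # a failure wipes out $20 of reserves
print(f"potential deposit contraction: ${before - after:,.0f}")
```

The point of the sketch is the leverage: a $20 loss of reserves permits a contraction of deposits ten times as large, which is why the Fed’s promise to replace lost reserves quickly was counted as the second safeguard.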

Yet despite everything written above, we can predict confidently that our fictional “Bernanke” would respond to a hypothetical crisis exactly as the real Ben Bernanke did respond to the crisis he faced and later described in the book he wrote about it. The actual and predicted responses are the same: Scare the daylights out of the public by predicting an imminent Depression of cataclysmic proportions and calling for massive government spending and regulation to counteract it. Of course, the real-life Bernanke claimed that he and Treasury Secretary Henry Paulson correctly foresaw the economic future and were heroically calling for preventive measures before it was too late. But the logic we have carefully developed suggests otherwise.

Nobody – not Federal Reserve Chairmen or Treasury Secretaries or California psychics – can foresee Great Depressions. Predicting a recession is only possible if the cyclical process underlying it is correctly understood, and there is no generally accepted theory of the business cycle. No, Bernanke and Paulson were not protecting America with their warning; they were protecting themselves. They didn’t know that a Great Depression was in the works – but they did know that they would be blamed for anything bad that did happen to the economy. Their only way of insuring against that outcome – of buying insurance against the loss of their jobs, their professional reputations and the possibility of historical “Hooverization” – was to scream for the biggest possible government action as soon as possible.

Ben Bernanke had been blasé about the effects of ultra-low interest rates; he had pooh-poohed the possibility that the housing boom was a bubble that would burst like a sonic boom with reverberations that would flatten the economy. Suddenly he was confronted with a possibility that threatened to make him look like a fool. Was he icy cool, detached, above all personal considerations? Thinking only about banking regulations, national-income multipliers and the money supply? Or was he thinking the same thought that would occur to any normal human being in his place: “Oh, my God, my name will go down in history as the Herbert Hoover of Fed chairmen”?

Since the reasoning he claims as his inspiration is so obviously bogus, it is logical to classify his motives as personal rather than professional. He was protecting himself, not saving the country. And that brings us to the case of Chief Justice John Roberts.

Chief Justice John Roberts: Selfless, Self-Interested or Self-Preservationist?

For centuries, economists have identified self-interest as the driving force behind human behavior. This has exasperated and even angered outside observers, who have mistaken self-interest for greed or money-obsession. It is neither. Rather, it merely recognizes that the structure of the human mind gives each of us a comparative advantage in the promotion of our own welfare above that of others. Because I know more about me than you do, I can make myself happier than you can; because you know more about you than I do, you can make yourself happier than I can. And by cooperating to share our knowledge with each other, we can make each other happier through trade than we could be if we acted in isolation – but that cooperation must preserve the principle of self-interest in order to operate efficiently.

Strangely, economists long assumed that the same people who function well under the guidance of self-interest throw that principle to the winds when they take up the mantle of government. Government officials and representatives, according to traditional economics textbooks, become selfless instead of self-interested when they take office. Selflessness demands that they put the public welfare ahead of any personal considerations. And just what is the “public welfare,” exactly? Textbooks avoided grappling with this murky question by hiding behind notions like a “social welfare function” or a “community indifference curve.” These are examples of what the late F. A. Hayek called “the pretense of knowledge.”

Beginning in the 1950s, the “public choice” school of economics and political science was founded by James Buchanan and Gordon Tullock. This school of thought treated people in government just like people outside of government. It assumed that politicians, government bureaucrats and agency employees were trying to maximize their utility and operating under the principle of self-interest. Because the incentives they faced were radically different than those faced by those in the private sector, outcomes within government differed radically from those outside of government – usually for the worse.

If we apply this reasoning to members of the Supreme Court, we are confronted by a special kind of self-interest exercised by people in a unique position of power and authority. Members of the Court have climbed their career ladder to the top; in law, there are no higher rungs. This has special economic significance.

When economists speak of “competition” among input-suppliers, we normally speak of people competing with others doing the same job for promotion, raises and advancement. None of these are possible in this context. What about more elevated kinds of recognition? Well, there is certainly scope for that, but only for the best of the best. On the current Court, positive recognition goes to those who write notable opinions. Only Justice Scalia has the special talent necessary to stand out as a legal scholar for the ages. In this sense, Justice Scalia is “competing” in a self-interested way when he writes his opinions, but not with his fellow justices. He is competing with the great judges of history – John Marshall, Oliver Wendell Holmes, Louis Brandeis, and Learned Hand – against whom his work is measured. Otherwise, a justice can stand out from the herd by providing the deciding or “swing” vote in close decisions. In other words, he can become politically popular or unpopular with groups that agree or disagree with his vote. Usually, that results in transitory notoriety.

But in historic cases, there is the possibility that it might lead to “Hooverization.”

The bigger government gets, the more power it wields. More government power leads to more disagreement about its role, which leads to more demand for arbitration by the Supreme Court. This puts the Court in the position of deciding the legality of enactments that claim to do great things for people while putting their freedoms and livelihoods in jeopardy. Any justice who casts a deciding vote against such a measure will go down in history as “the man who shot down” the Great Bailout/the Great Health Care/the Great Stimulus/the Great Reproductive Choice, ad infinitum.

Almost all Supreme Court justices have little to gain but a lot to lose from opposing a measure that promotes government power. They have little to gain because they cannot advance further or make more money and they do not compete with Marshall, Holmes, Brandeis or Hand. They have a lot to lose because they fear being anathematized by history, snubbed by colleagues, picketed or assassinated in the present day, and seeing their children brutalized by classmates or the news media. True, they might get satisfaction from adhering to the Constitution and their personal conception of justice – if they are sheltered under the umbrella of another justice’s opinion or they can fly under the radar of media scrutiny in a relatively low-profile case.

Let us attach a name to the status occupied by most Supreme Court justices and to the spirit that animates them. It is neither self-interest nor selflessness in their purest forms; we shall call it self-preservation. They want to preserve the exalted status they enjoy and they are not willing to risk it; they are willing to obey the Constitution, observe the law and speak the truth but only if and when they can preserve their position by doing so. When they are threatened, their principles and convictions suddenly go out the window and they will say and do whatever it takes to preserve what they perceive as their “self.” That “self” is the collection of real income, perks, immunities and prestige that go with the status of Supreme Court Justice.

Chief Justice John Roberts is an exemplar of the model of self-preservation. In both of the ObamaCare decisions, his opinions for the majority completely abandoned his previous conservative positions. They plumbed new depths of logical absurdity – legal absurdity in the first decision and semantic absurdity in the second one. Yet one day after the release of King v. Burwell, Justice Roberts dissented in the Obergefell case by chiding the majority for “converting personal preferences into constitutional law” and disregarding the clear meaning of the language in the laws being considered. In other words, he condemned precisely those sins he had himself committed the previous day in his majority opinion in King v. Burwell.

For decades, conservatives have watched in amazement, scratching their heads and wracking their brains as ostensibly conservative justices appointed by Republican presidents unexpectedly betrayed their principles when the chips were down, in high-profile cases. The economic model developed here lays out a systematic explanation for those previously inexplicable defections. David Souter, Anthony Kennedy, John Paul Stevens and Sandra Day O’Connor were the precursors to John Roberts. These were not random cases. They were the systematic workings of the self-preservationist principle in action.

DRI-219 for week of 11-23-14: A Columnist’s Dawning Recognition of Deadly Auto-Safety Regulation

An Access Advertising EconBrief:

A Columnist’s Dawning Recognition of Deadly Auto-Safety Regulation

We are familiar with investigative reports by reporters in print and broadcast media and, in recent years, online. We view these as the mechanism for regulating institutions not subject to the constraints of the marketplace. Government is chief among these.

This routine has accustomed us to casting the news media in the role of cynical watchdog, always looking for wrongdoing and too prone to suspect the motives of those it covers. Of course, we may suspect the press of pre-existing bias – in favor of Democrats, for instance. But for the most part, we believe that their interests are served by finding scandal, wrongdoing and malfeasance, because these things are news.

The possibility that the press itself may be naïve and complacent is the last one we consider. It should not be overlooked.

Air Bag Safety 

Wall Street Journal columnist Holman Jenkins has written a series of columns about auto safety and regulation. Many of them followed the regulatory travails of Toyota, which endured a prolonged crucifixion when its vehicles were ostensibly subject to a problem of “unintended acceleration.” Although it was all too clear that the problem was caused by drivers unwittingly depressing the accelerator instead of the brake pedal, the company was beset by the fable that a bug in the car’s computer code was causing cars to accelerate when they should be slowing. Despite the conspicuous lack of scientific evidence for this hypothesis – not surprising in view of its impossibility – Toyota eventually was forced to pay out hundreds of millions of dollars in settlement money to make the issue go away.

Having set a tone of skepticism toward regulators, Jenkins turned next to the recent disclosure by Takata that its air bags have displayed defects. Toyota and Honda have recalled over 8 million vehicles to replace the air bags. The defect (apparently caused by moisture entering the air bag’s inflator) breaks down the ammonium-nitrate propellant tablets, causing them to burn faster and explode more violently than normal. In turn, this shreds the metal housing surrounding the tablets and sends a shower of shrapnel into the driver and front-seat passenger (if any).

Jenkins noted that the demand by federal automobile regulators that the companies recall millions more vehicles is suspiciously timed to coincide with the end of hearings on the response by Japanese automakers to the finding of defective air bags. He reserved his strongest note of skepticism, though, for the use of air bags as safety devices.

“The faulty Takata air bags are connected to five deaths in 13 years, which is a tiny fraction of the deaths known to be caused by air bags working as designed [emphasis added]. When the Takata mess is cleaned up, we’ll still be left with a highly problematic safety technology.”

What’s this? Air bags themselves cause automobile-occupant deaths? They’re supposed to prevent deaths, not cause them. This is surely news to the general public, which is why Jenkins continues with a brief chronology of air bags’ journey from industrial infancy to ubiquity. “Washington began pushing automakers to install air bags in the 1980s, and ever since Washington has been responsible for research that confirms that air bags save hundreds of lives a year. These studies, though, credit air bags with saving people who were also wearing seat belts, when considerable evidence indicates seat belts alone do the job.”

“These studies also assume that deaths in collisions where air bags deployed are always attributable to the collision, never [to] the air bag.”

Jenkins does not mention that the push for air bags coincided with a federal push for mandatory wearing of seat belts. The first state law requiring the wearing of a seat belt meeting federal specifications – essentially, a three-point seat belt buckling over the lap but also including a shoulder restraint – was passed in 1984. States received seven-figure federal bounties for passing a mandatory seat-belt law and achieving an estimated compliance rate beyond a specified level.

“A 2005 study by Mary C. Mayer and Tremika Finney published by the American Statistical Association tried to correct for these errors and found that the clearest effect of air bags was an increased risk of death for unbelted occupants in low-speed crashes. Likewise, a 2002 study of 51,000 fatal accidents by University of Washington epidemiologists found that air bags (unlike seat belts) contributed little to crash survivability.”

“[Thus] air bags began as simple bombs buried in the dashboard designed to protect the typical non-seat belt wearing accident victim – the typical unbelted victim being a 170-pound teenage male. In 1997 came the reckoning: Air bags designed to meet the government’s criteria were shown to be responsible for the deaths of dozens of children and small adults in otherwise survivable accidents.”

This is the key point in Jenkins’ chronology, the point at which the reader’s eyebrows shoot up and he shakes his head in disbelief. Dozens of deaths? Adults and children? How did I miss the public furor over this? After all, when one or two people die owing to an automobile defect that a company knew about or should have known about, all hell breaks loose. In fact, the history of government suppression of unfavorable air-bag performance goes back decades. But Jenkins makes no mention of this; instead, he moves on.

“Since then, air bags have become ‘smarter,’ with computers modulating their deployment depending on type of crash, passenger characteristics and whether seat belts are being worn. Undoubtedly the technology has improved but still debatable is whether the benefits outweigh the risks and costs. Air bags remain one of the biggest reasons for vehicle recalls – and no wonder, given that these devices, which are dangerous to those who manufacture them and to those who repair vehicles, are expected to go years without maintenance or testing and then work perfectly.”

“Because, in the minds of the public, not to mention in the slow-motion videos on the evening news, air bags are seen as gentle, billowy clouds of perfect safety, yet another problem is the potential encouragement they give motorists to drive more aggressively or forgo the hassle of buckling up.” Now Jenkins has driven himself and his readers into water over their heads and is stalled. His column will drown unless it is rescued promptly. He has made a strong case that air bags are inherently unsafe, but is now suggesting something else – a different source of harm from their use. The fatal stretch of water was entered with the words “…yet another problem is the potential encouragement they [air bags] give motorists to drive more aggressively or forgo the hassle of buckling up.” This requires the services of a professional economist.

The Economic Principle of “Risk Compensation”


People tend to increase their indulgence in risky activities when they perceive that the safety of those activities has been enhanced. Risk should be treated just like any other consumption good – when the price of risk goes down, we should expect people to purchase more of it.

The first of the previous two sentences would meet with the approval of most people. The second would not. Yet from the economist’s perspective they might be interpreted as saying the same thing. Commercial spacecraft are currently being readied to carry tourists; wouldn’t we expect tourists to be more enthusiastic when improvements in launch and flight safety reduce the risk of death and serious injury for passengers? Still, we would expect an appreciable risk in space travel for the indefinite future, wouldn’t we? When improvements in contraception result in better prophylactics, don’t we expect people to have sex more often, despite the fact that they still run a risk of contracting a sexually transmitted disease?

In 1975, economist Sam Peltzman published a seminal article in the Journal of Political Economy. He analyzed the effects of a series of government-mandated safety devices introduced beginning in the mid-1960s. His analysis suggested that the net effect on safety was approximately zero. Peltzman offered two explanations for this surprising result, the most plausible of which was that requiring drivers to use seat belts causes some people to take more driving risk than they would have taken driving beltless. This additional risk-taking produced more accidents. While the increased use of safety devices tended to produce fewer injuries and fatalities among automobile occupants, the increased number of accidents also endangered people outside the vehicle, such as pedestrians, motorcyclists and bicyclists. These additional injuries and fatalities tended to offset those prevented by seat belts, so that the comparative end result in driving statistics such as “fatalities per million miles driven” was a wash.
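Peltzman’s “wash” result can be sketched with back-of-the-envelope arithmetic. The numbers below are entirely hypothetical, invented for illustration (they are not Peltzman’s data): belts cut occupant deaths per accident, risk compensation raises the number of accidents, and the two effects cancel in the headline fatality rate.

```python
# Hypothetical, illustrative numbers only -- not Peltzman's actual data.
miles = 1_000_000  # vehicle-miles driven in both scenarios

# Before the seat-belt mandate: fewer accidents, each deadlier for occupants.
accidents_before = 100
occupant_deaths_before = accidents_before * 0.020    # 2.0 occupant deaths
pedestrian_deaths_before = accidents_before * 0.005  # 0.5 pedestrian deaths

# After the mandate: belts cut occupant risk per accident by a quarter,
# but risk compensation raises the accident count by a quarter.
accidents_after = 125
occupant_deaths_after = accidents_after * 0.015      # 1.875 occupant deaths
pedestrian_deaths_after = accidents_after * 0.005    # 0.625 pedestrian deaths

rate_before = (occupant_deaths_before + pedestrian_deaths_before) / miles
rate_after = (occupant_deaths_after + pedestrian_deaths_after) / miles

# Occupant deaths fall, pedestrian deaths rise, and the headline
# statistic -- deaths per million vehicle-miles -- comes out a wash.
print(rate_before * 1e6, rate_after * 1e6)  # roughly 2.5 vs 2.5
```

The point of the sketch is not the particular numbers but the structure: a per-accident improvement can be fully offset by an induced rise in accidents, with part of the cost shifted to people outside the vehicle.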

Over the succeeding forty years, this kind of outcome became proverbial throughout the social sciences, not just economics. In 2006, Smithsonian Magazine published an article summarizing the powerful effect that Peltzman’s work has had on the world. His ideas are grouped under the heading of “risk compensation,” an evocative term implying that we satisfy our appetite for risk by responding to added safety by “purchasing” more risk. The principle has been observed in nations around the world, among adults and children, in activities ranging from driving to playground behavior to sports. The famous economist N. Gregory Mankiw, former Chairman of the President’s Council of Economic Advisers under President George W. Bush, blogged about “Sam Peltzman, who taught us all that mandatory seat-belt laws cause drivers to drive more recklessly.” Mankiw dubbed the relevant principle the “Peltzman Effect,” making Peltzman one of a select group to have a scientific principle named after him.

Despite the scientific status of risk compensation and the Peltzman Effect, Holman Jenkins shows no sign of having heard of it. He speaks of the “potential encouragement” offered by air bags to more aggressive driving by motorists as if he had just exhumed a Stone Age cave and stumbled upon a rectangular version of the wheel therein. And he applies the principle to air bags with no apparent awareness of its equal applicability to seat belts.

The Perils of Mandatory Safety


“By now,” Jenkins laments, “those of a certain age remember that Detroit was the villain that opposed putting explosive devices in their vehicles, plumping instead for mandatory seat-belt laws (which, amazingly, certain safety groups opposed).” No economist is surprised that Detroit opposed the idea of being forced to increase the cost of production by providing a safety benefit that (a) consumers didn’t want and (b) didn’t work, which would (c) expose them to endless litigation as well as threaten them personally. Seat belts were several orders of magnitude less expensive and the loss of freedom to consumers did not represent a business loss to automobile companies.

Jenkins’ failure to understand the opposition to mandatory seat-belt laws is astonishing, though, since it is based on the very same principle that he just invoked to oppose air bags. There is a lot to be said for seat belts when provided as a voluntary option for consumers. There is everything wrong with mandatory seat-belt laws because they encourage (force?) risk-loving drivers to obey the law by buckling up, then to fulfill their love of risk by driving more aggressively – and to do this as a substitute for going unbuckled in the first place. An unbuckled risk-lover is a driver who is himself bearing the risk he chooses to run. A buckled-up aggressive driver is a risk lover who is imposing the risk he chooses to run on other drivers – and pedestrians and cyclists – who may be more risk averse. This is bad theoretically because it is economically inefficient. Economic inefficiency is bad in the practical sense because it misaligns cost and benefit. In this case, it allows risk lovers to benefit from the risks they run but imposes some of the costs on other people who don’t benefit because they are risk-averse individuals who didn’t want to run those risks in the first place.

The practical side of all this has been seen in various ways. New Hampshire is the only state that has not passed a mandatory seat-belt law in the interval since the mid-1980s. It has poorer-than-average weather and topography, so we would expect it to have worse-than-average traffic-fatality results, all other things equal. Since traffic-safety experts predicted that mandatory seat-belt laws would usher in traffic-safety nirvana by reducing fatalities hugely, we would expect to find that New Hampshire highways had become a veritable slaughterhouse – if mandatory seat belts were the predicted panacea, that is.

Instead, New Hampshire traffic statistics have improved to near the top of the national rankings despite its singular lack of a mandatory seat-belt law. New Hampshire should be the poster-state for mandatory seat-belt laws; instead, it is the smoking gun that points to their guilt. This fact has gone completely unremarked in the national news media, which is probably why Holman Jenkins hasn’t noticed it.

But there is no excuse for Jenkins’ failure to notice the slowing improvement in nationwide traffic statistics that accompanied the installation of air bags and the passage of mandatory seat-belt laws. The rate of fatalities per million vehicle-miles driven has been falling since the 1930s, alongside the development of modern automobiles, highways, safety methods, signage and improved quality control in production and repair. The federal highway-safety bureaucracy makes a point of announcing the yearly fatality data because it usually represents a new low point. What it fails to announce is the slowing rate of decline. Indeed, fatalities have recently risen in spite of a weak economy and reduced auto travel.

Jenkins apparently considers himself daring for suggesting that air bags are counterproductive and should be eliminated. He cites the myriad of safety innovations that have come on line in the last few years: automatic lane-violation warning devices, automatic skid-correction devices, automatic collision avoidance and braking sensors, automatic stabilizers and design features that direct crash energy away from passengers. “One imponderable is how much faster progress might have been without the bureaucracy’s forced diversion of industry capital to air bags … Each stride tilts the calculation away from having an IED in the dashboard as a net benefit to motorists, bringing closer the day when a new safety innovation will be announced: an air bag-free vehicle.”

Actually, Jenkins’ repudiation of air bags and reaffirmation of mandatory seat belts puts him about 45 years behind the times – about where we stood before Sam Peltzman wrote in 1975. Jenkins deserves credit for daring to break out of the regulatory mindset by opposing air bags, something other journalists have failed to do. That such a step counts as daring indicates the intellectual depths to which the downward market spiral of journalism has taken us.

What Jenkins should be doing is calling for abolition of the Department of Transportation, not air bags. Consider this: According to Jenkins’ own logic, the DOT mounted a nationwide campaign for mandatory seat-belt laws while also insisting upon mandatory air-bag installation in vehicles. But this is crazy. Using the language of game theory, we would say that the presence of air bags “dominates” seat-belt use, making it superfluous. With an air bag, one of two things happens: the air bag deploys as intended – in which case the passenger is protected in the accident – or the air bag explodes – in which case the passenger is maimed or killed. Either way the seat belt is superfluous. Wearing a seat belt doesn’t add protection if the air bag works and doesn’t protect against shrapnel if the air bag explodes. There is also a third possibility: the air bag might explode prematurely, killing or maiming the passenger even though there is no accident. And the seat belt is superfluous in this case as well.

Although we shouldn’t be too hard on Holman Jenkins, we shouldn’t feel bound by his intellectual limitations or his inhibitions. Now that we know that both mandatory air bags and mandatory seat-belt laws are abominations enacted in the name of automobile safety, what are we to make of a federal-government safety bureaucracy that insists upon them even after their demonstrated failures? And with the technology of self-driving cars a demonstrated reality, what are we to think when that same bureaucracy is distinctly reluctant to allow it to proceed?

Why Does DOT Tend to Hinder Rather Than Promote Automobile Safety?

The heading for this section will anger many readers. The conventional view of federal regulation has been described by the late Nobel laureate James Buchanan as the “romantic” view of government. Roughly speaking, it is that government regulators act nobly and altruistically in the public interest. Upon very close examination, the term “public interest” will be found so vague as to defy precise definition. However, this is advantageous in practice, as it allows each user to define it according to his or her individual desires – it makes the theory of government into a sort of fairy-tale, wish-fulfillment affair. No wonder this approach has survived so long with so little clear-eyed scrutiny! Everybody is afraid to look at it too hard for fear that their fondest dreams will go up in smoke. And indeed, that is what actually happens when we try to put this theory into practice.

Suppose we depart from this sentimental approach by inquiring into the incentives that confront bureaucrats in the Department of Transportation (DOT). First, ask what happens if DOT develops an innovative safety technology that saves the lives of consumers. Let’s say, for example, that they develop an improved seat belt, such as the three-point seat belt which turned the failed two-point lap belt into a viable safety device. Will the individual researcher(s) in DOT get a bonus? Will he or she (or they) patent the device and earn substantial royalties? Will they become famous? The answers are no, no and no, respectively. Thus, there are no positive incentives motivating DOT to improve automobile safety.

On the other hand, suppose DOT does just the opposite. Suppose it actually worsens auto safety. Indeed, suppose DOT does exactly what the political Left routinely accuses capitalist businessmen of doing; namely, kills its “customers” (in DOT’s case, this would be the consumers who are the ostensible beneficiaries of regulation).

What an irritating, outrageous question to pose! We all know that government regulatory agencies exist to protect the public, so it is unforgivably irresponsible to suggest that they would actually kill the people they are supposed to protect. But – let’s face it – that is exactly what Holman Jenkins is implying, isn’t it? He never has the cojones to blurt it out, but the statements that “Washington has been responsible for research” and “air bags designed to meet the government’s criteria were shown to be responsible for the deaths of dozens of children and small adults in otherwise survivable accidents” don’t leave much to the imagination, do they? As it happens, there is plenty more dirty linen in the government’s closet that Jenkins leaves unaired.

As long as everybody else is as deferential (or as cowardly) as Jenkins, the general public will not link government with the deaths in the way that private businesses are linked with the deaths of consumers. When more consumers die, what happens is this: government benefits. DOT uses this as the excuse to hire more people, beef up research and spend more money. Larger staffs and bigger budgets are the bureaucratic equivalent of higher profits, but this differs from the private-sector outcome in that higher profits are normally associated with better outcomes for consumers while, if anything, the reverse is true of bureaucratic expansion in government.

Suppose DOT were to recommend that we proceed at breakneck speed to adopt driverless cars in order to eliminate virtually all of our current 30,000+ annual highway fatalities. Suppose the agency even brings about this outcome within just a few years. There would be little or nothing left for the agency to do; it would have succeeded so well that it would have innovated itself out of existence. No wonder that DOT is dragging its feet to slow the acceptance of driverless cars!

In the private sector, there is an incentive to solve problems. In government, there is never an incentive to solve problems because that will usually leave government with no excuse to exist, to grow and expand. When a private firm solves a problem, it makes a big pile of money that it can use to expand or enter some new line of business – even if the solution to the problem leaves it with no reason to continue producing its current product. There is no government analogue to this reward and consequently no incentive for government to succeed, only incentives for it to fail. Indeed, there are even incentives for it to do harm. And in the arena of automobile safety, that is exactly what it has done.

Just to reinforce the point, let’s generalize it by broadening our evidentiary base beyond federal regulation. Earlier we cited the numerous safety improvements that are being incorporated piecemeal into automobiles by the major auto companies: lane-violation detection, automatic braking, collision avoidance and others. Driverless cars include all of these and more besides. The state of California has recently passed regulatory legislation forcing all driverless cars to allow a human driver to “take over in an emergency;” e.g., bypass the sensors that govern the driverless car’s actions. But every one of the safety improvements listed was designed expressly to produce mistake-proof behavior by the car in various emergency situations. In other words, the regulatory legislation has the effect of defeating the safety purpose of the driverless car. Oh, some nobler, more romantic rationale is advanced, but that is the effective result of the law.

The case of the DOT is not unique at the federal level, either. Hundreds of thousands of corpses could attest to the harm the FDA has done by blocking the approval of new drugs. Many economists could detail – and many have detailed – the harm done to competition by the application of antitrust laws ostensibly designed to preserve and protect it.

Economists are the real investigative reporters. Most of the time, their tools consist of logic and arithmetic rather than confidential informants and leaked documents. But when it comes to exposés, their stories put those of journalists in the shade.

DRI-221 for week of 12-8-13: What’s (Still) Wrong with Economics?

An Access Advertising EconBrief:

What’s (Still) Wrong with Economics?

Taking stock is an end-of-year tradition. This space devotes the remainder of the year to explaining the value of economics, so it’s fitting and proper to don a hair shirt and break out the penance whips as 2013 fades into the distance. What’s wrong with economics? Why doesn’t its productivity justify its title of queen of the social sciences – and what could be done about that?

This omnibus indictment demands an orderly presentation, organized by subject area.

Teaching: Although the motto of the Econometric Society is “science is measurement,” a better operational definition is “science is knowing what the hell you’re talking about.” On that score, economics has a lot to answer for. A science is only as good as its practitioners, who regurgitate what they are taught. Teaching is the first place to lay blame for the shortcomings of economics as a science.

In the past, economics has seldom been taught at the secondary level. That is changing, but only slowly. The subject is so difficult to master and absorption is such an osmotic process that an early start would vastly improve results. It would also force an improvement in the standard mode of teaching.

At the college level, economics is taught by teaching the same formal theory that Ph. D. students are required to master. Granted, college freshmen begin at the most basic level using far simpler tools, but they learn the same techniques. As the successful business economist Leif Olsen (among others) has pointed out, the tacit premise of college economics instruction is that all students will go on to study for their doctorate in the subject.

That is absurd. It forces textbooks to concentrate on force-feeding students bits (or chunks) of technique, supposedly to insure that all students are exposed to the tools and reasoning used by working economists. The use of the word “exposed” in this context should call to mind a disreputable man clothed only in a raincoat, accosting impressionable females in a public park. That captures both the thoroughness and duration of the exposure to each technical refinement, as well as the depth of understanding and relative appeal to the emotions and intellect on the part of the students.

What is needed here are textbooks and teachers that cover much less ground but do it much more thoroughly. Only a tiny fraction of students seek, let alone obtain, the Ph. D. The rest need to grasp the basic logic behind supply and demand, opportunity cost and the role of markets in coordinating the dispersed knowledge of humanity. This requires intensive study of basics – something that would also benefit today’s eventual doctoral candidates, many of whom never learn those basics. The only textbook serving this need that comes quickly to mind is The Economic Way of Thinking, by the late Paul Heyne.

In addition to the benefits accruing to undergraduate education, other advantages would follow from this superior approach. As it now stands, graduate students in economics are hamstrung by the subject’s austere formalism. The mathematical approach is now so rigorous at the highest levels of economics that the subject bears a stronger resemblance to engineering or physics than to the political economy practiced by classical economists in the 18th and 19th centuries. If this so-called rigor added value in form of precision to the practice of economics, it would be worth its cost in pain and hardship.

Alas, it doesn’t. Even worse, graduate students have to spend so much time grappling with mathematics that they lack the time to absorb the basic elements underlying the mathematics. Often, the mathematical models must eliminate those basic elements in order to make the mathematics tractable. We are then left with the anomaly of an economic theory that must truncate or amputate its economic content in order to satisfy certain abstract scientific criteria. This obsession with formalism has substituted bad science for good economics – the worst kind of tradeoff.

The reader might wonder who benefits from the status quo, since beneficiaries have not been evident in the telling thus far. The current system creates a narrow road to academic success for career economists. They must fight their way through the undergraduate curriculum, then labor as part-time teachers and research assistants while taking their own graduate courses. Writing the Ph. D. dissertation can take years, after which they have a short time (usually six years) in which to write publishable research and get it placed in the small number of peer-reviewed economics journals. If they succeed in all this, they may end up with tenure at an American university. This will entitle them to job security and opportunity for advancement and a sizable income. If they fail – well, there’s always the private sector, where a small number of economists attain comparable career success. It is the survivors of this process, the tenured faculty at major American colleges and universities, who benefit from the system as it exists.

Perhaps these privileged few are an extraordinarily productive lot? Well, a tiny handful of the professoriate produce research output that might reasonably be classed as valuable. Most articles published in professional journals, though, are virtually worthless. Nobody would pay any significant money to sponsor them directly. That’s not all. In addition to the arid mathematics employed in theoretical research, there is also the statistical technique used to generate empirical articles. For several decades, the primary desideratum in statistical economics has been to obtain a “statistically significant” relationship between the variable(s) in the economic model and the variable we are trying to understand. If questioned about this, the average person would probably define this criterion as “a large enough effect or impact to be worth measuring, or large enough to make us think what we are measuring has an important influence on what we are studying.”

Wrong! “Statistical significance” is a term of art that means something else – something that is more qualitative than quantitative. Essentially, it means that there is a likelihood that the relationship between the model variable(s) and the variable of interest is not due to random chance but is, rather, systematic. Another way of putting it would be to say that statistical significance answers a binary, “yes-no” question instead of the question we are usually most interested in. The big question, the one we most want the answer to, is usually a “how much” question. How much influence does one variable have on another; how great is the importance of one variable on another? The question answered by statistical significance is interesting and useful, but it is not the one we care about the most. Yet it is almost the only one the social sciences have cared about for decades. And, believe it or not, it is apparent that many economists do not even realize the mistake in emphasis they have been making.
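The gap between “statistically significant” and “large” can be shown with a small sketch of the standard two-sample t statistic. The means, standard deviation and sample sizes below are invented for illustration: the same tiny effect that is nowhere near significance with 100 observations per group becomes “highly significant” with a million per group, even though the answer to the “how much” question never changes.

```python
import math

def t_statistic(mean_diff, sd, n):
    """Two-sample t statistic for equal-size groups with a common sd."""
    return mean_diff / (sd * math.sqrt(2.0 / n))

effect = 0.01  # hypothetical: a tiny difference between group means
sd = 1.0       # hypothetical common standard deviation

t_small = t_statistic(effect, sd, 100)        # n = 100 per group
t_large = t_statistic(effect, sd, 1_000_000)  # n = 1,000,000 per group

print(round(t_small, 3))  # ~0.071: nowhere near the conventional 1.96 cutoff
print(round(t_large, 3))  # ~7.071: "highly significant" by any convention
```

Nothing about the economic importance of the effect has changed between the two lines; only the sample size has. Significance answers “is this relationship probably systematic rather than chance?”, while the effect size (0.01 here) answers “how much does it matter?”.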

Yet it is not the small number of beneficiaries or even their ghastly mistakes that indicts the current system. Rather, it is economic theory itself, which insists that people benefit from consumption rather than production. It is consumers of economics – students and the general public – who should be reaping rewards. The benefits earned by tenured professors are not bad if they are earned by providing comparable benefits to consumers rather than merely reaping monopoly profits from an exclusionary process. But students are lowest on the totem pole on any major university campus. Tenured faculty members teach as little as possible, usually only two courses per semester. Teaching is little rewarded and often poorly done by tenured and non-tenured faculty alike. Academic lore is filled with stories of award-winning teachers who neglected research for teaching and were dumped by their university in spite of their teaching accomplishments.

The late Nobel laureate James Buchanan likened the position of academic economists today to “a kind of dole;” that is, they are living off the taxpayer rather than earning their keep. Administrators are fellow beneficiaries of the system, although they are pilot fish riding the backs of all academicians, not merely economists.

The Public: Consumers of economics include not merely those who study the subject in school but also the general public. Economists advise businesses on various subjects, including the past, present and future level of economic activity overall and within specific sectors, industries and businesses. They provide expert witness services in forensics by estimating business valuation, damage and loss in litigation, by representing the various parties in regulatory proceedings and particularly in antitrust litigation. Economists are the second-most numerous profession in government employment, behind lawyers.

For some seventy years, economists have played an important role in the making of economic policy. One might expect that economists would play the most important role; who is qualified to decide economic policy if not economists? In fact, modern governments place politicians and bureaucrats ahead of everybody when it comes to policymaking, regardless of expertise. This has created a situation in which we would have been better off with no economic policy at all than with economic policy run by non-economists. Still, the recent efforts of professional economists do not paint the profession in a favorable light, either.

The problem with public perception of economics and economists is that they have come to regard economics as synonymous with “macroeconomics;” that is, with forecasting and policymaking aimed at economic statistical aggregates like employment, gross domestic product and interest rates in the plural. This is the unfortunate byproduct of the Keynesian Revolution that overtook economics in the 1930s and reigned supreme until the late 1970s. The overarching Keynesian premise was that only such an aggregative focus could cure the recurrent recessions and depressions that Keynesians ascribed to the inherent instability and even stagnation of a private economy left to its own devices.

It is ironic that every premise on which Keynes based his conclusions was subsequently rejected by the four decades of extensive and intensive research devoted to the subject. It is even more ironic that the conclusion reached by the profession was that attention needed to be focused on developing “microfoundations of macroeconomics,” since it was the very notion of microeconomics that Keynes rejected in the first place. And the crowning irony was that, while Keynes’s ideas filtered down into the textbook teaching of economics and even into media presentation of economic news and concepts to the general public, the rejection of Keynesian economics never reached the news media or the general public. Textbooks were revised (eventually), but without the fanfare that accompanied the “Keynesian Revolution.”

So it was that when the financial crisis of 2008 and ensuing Great Recession of 2009 reacquainted America with economic depression, Keynesian economists could reemerge from the subterranean depths of intellectual isolation like zombies from a George Romero movie without triggering screams of horror from the public. Only those with very long memories and a healthy quotient of temerity stood up to ask why discredited economic policies had suddenly acquired cachet.

When the Nobel Foundation began awarding quasi-Nobel prizes for economics in the late 1960s, a good deal of grumbling was heard in the ranks of the hard sciences. Economics wasn’t a real science, they maintained stubbornly. A real science is cumulative; it creates a body of knowledge that grows larger over time owing to its revealed truth and demonstrated value in application. Economics just recycles the same ideas, they scoffed, which go in and out of fashion like women’s hemlines rather than being proved or disproved.

From today’s vantage point, we can see more than just a grain of truth in their disparagement – more like a boulder, in fact. What macroeconomist Alan Blinder referred to in a journal article as “the death and life of Keynesian economics” is a perfect case in point. Keynesian economics did not arise because it was a superior theory – research proved its theoretical inferiority. Not only that, it took decades to settle the point, which doesn’t exactly constitute a testimonial to the value of the subject or the lucidity of its doctrines. Nor did Keynesian economics triumph in the arena of practical application; that is, countries did not eliminate recessions and depressions using Keynesian policies, thereby proving their worth. Just the opposite: after decades of pinning its hopes on Keynesian economics, the British Labour Party heard its own leader, James Callaghan, renounce it in a celebrated denunciation in the mid-1970s.

No, Keynesian economics made a comeback because it was politically useful to the Obama administration. It enabled them to spend vast amounts of money and direct the spending to political supporters on the pretext that they were “stimulating the economy.” If economics had to justify its existence by pointing to the results of “economic policy,” economists would be thrown out into the street and forbidden to practice their craft.

In 1965, Time Magazine put John Maynard Keynes on its cover and proclaimed the death of the business cycle. This obituary proved to be premature. Like Icarus, economists flew too close to the sun; their wings melted, and the profession is now in freefall, putting up a bold front and proclaiming “so far, so good” as it plummets to Earth. The only remedy for this hubris is to straightforwardly admit that economics is not a hard, quantitatively predictive science in the mold of the natural sciences. Its fundamental insights are not quantitative at all, but they are absolutely vital to our well-being. When combined with such other social sciences as law and political science, economics can explain patterns of human behavior involving choice. It holds the key to human progress by making the knowledge sequestered in billions of individual brains accessible in useful form for the mutual benefit of all. Thanks to economics, billions of people can live who would die without its insights. These benefits are anything but trivial.

Economics can even ameliorate the hardships imposed by the business cycle, as long as we do not expect too much and can resign ourselves to occasional recessions of limited length and severity. In this regard, success can be likened to hitting home runs in baseball. Trying to hit home runs by swinging too hard usually doesn’t work; making solid contact is the key to hitting homers. Many great home-run hitters, including Hank Aaron and Ernie Banks, were not large, powerful men who swung for the fences. They were wiry, muscular hitters who hit solid line drives. The economic analogue of this philosophy is to allow free markets to work and relative prices to govern the allocation of resources rather than trying to use government spending, taxes and money creation as a bludgeon to hammer the efforts of markets into a politically acceptable shape.

Remedies: In thinking about ways to right its wrongs, economics should take its own advice and fall back on free markets. Rather than trying to administratively reshape the academic status quo and tenure-based faculty system, for example, economists should simply support privatization of education. This is simply taking current professional support of tuition vouchers and charter schools to the next logical level. Tenure is a protected academic monopoly, unlikely to survive in a free private market. If it does, this will mean that it has unsuspected virtues; so much the better, then.

Recent decades have seen the rise of applied popular economics books written to bring economics to the masses. The best-known and most popular of these, Freakonomics, is among the least useful – but it is better than nothing. Better works have been submitted by economists like Steven Landsburg (The Armchair Economist) and David Friedman. Their worthy efforts have helped to turn the tide by correcting misapprehensions and redirecting focus away from macroeconomics. This is another good example of reform from within the profession that does not require economists to sacrifice their own well-being.

Perhaps the one missing link in economics today is leadership. Revolutions in scientific theory and practice are typically effected by individuals at the head of scientific movements. In economics, these have included men like Adam Smith, David Ricardo, Karl Marx, the Austrian economists of the 19th century, Alfred Marshall, Keynes and Milton Friedman. Today there is a leadership vacuum in the profession; nobody with the intellectual stature of Friedman remains to take the lead in reforming economics.

Given the woes of economics and economic theory, a new candidate seems unlikely to come riding over the horizon. It may be that economists will have to prop up an intellectual giant of the past to ride like El Cid against the ancient foes of ignorance, apathy, prejudice and vested interest. There is one outstanding candidate: the man who saved the 20th century in life and whose wide-ranging thought and multi-disciplinary theory is alone capable of midwifing a new, sustainable economics of the future. That would be F.A. Hayek. Recent stirrings within the profession suggest a growing acknowledgment that Hayek’s economics has been too long neglected and explains the crisis, recession and current stagnation far better than anything offered by Keynes or his followers. There is no better body of work than his to serve as a model for what is wrong with economics and how to correct it.

DRI-293 for week of 3-3-13: The Sequester: A Barack H. Obama Production

An Access Advertising EconBrief:

The Sequester: A Barack H. Obama Production

The appearance of First Lady Michelle Obama as presenter of the climactic Best Picture Academy Award at the recent Oscar ceremony is the latest sign of the symbiosis between American politics and Hollywood. The convergence between the political and entertainment industries is now so close that we can use the same economic model to analyze them.

Since both industries are popular and objects of public scrutiny, this model will have great practical value. Its first application will be to analyze the sequester, the political-theater production currently enjoying its first run in popular media throughout the nation.

The Model

The late Nobel-Prize-winning economist James Buchanan campaigned tirelessly against what he called the “romantic view” of government as the promoter of the “public interest.” Government is composed of particular individuals. In order to be operational, the concept of the “public interest” must be comprehensible to those people. If the activities of government were limited only to those whose net benefits were positive for everybody, it would be a minuscule fraction of its present size. Clearly, the actual purposes of government are redistributive. But what unique redistributive plan could possibly command unanimous support from the bureaucratic minions of government? The only conceivable answer is that bureaucrats serve their own interests, presumably having convinced themselves that their interest and the public interest coincide.

Government bureaucrats thus share a common goal with private business owners. But whereas private businesses produce goods and services in markets under the discipline of market competition, governments provide only executive, legislative and judicial services while contracting out for the production of needed goods and services. Far from submitting to the discipline of competition, governments claim monopoly privileges for themselves and dispense them to others – often in exchange for political support.

From inception until the gradual disintegration of the studio system of moviemaking, Hollywood operated under the marketplace model of competition. When adverse antitrust decisions in the 1940s killed the long-term viability of the giant studios, movies changed their way of living. This drift away from competitive capitalism accelerated over the last two decades. Today, the approach of government and Hollywood to production is remarkably similar.

Private businesses produce goods and services in order to satisfy the demand of consumers. They satisfy consumer demand in order to earn profits and maximize the profit of their owners. Thus, both sides of the market strive to maximize their real income or utility through the consumption of goods and services. Consumers act directly when purchasing for their own consumption or saving for their future consumption. Producers act indirectly when producing for the consumption of others or directly when producing for themselves. Input suppliers act indirectly by supplying labor and raw materials to producers to facilitate production and consumption for others.

Governments cannot act as private businesses do because their bureaucrats are not spending their own money and taxpayers have no effective leverage over them. Bureaucrats serve their own interests – which are those of the politicians who control their fate. Politicians, in turn, most want to retain their hold on office. For the most part, this is accomplished by redistributing money in favor of those who vote for them. Since government has little or no power to increase the supply of goods and services but considerable power to reduce it, redistribution is accomplished predominantly by harming some people while purporting to help others.

We know that private production is beneficial because consumers voluntarily choose from among many competing products in a free marketplace in which producers can enter and leave at will. The existence of prices allows everybody to incrementally assess the value of every unit of input and output to ensure its net benefit before purchase. Profit directs the flow of resources to areas of greatest value to consumers.

None of these safeguards applies to political production. In government, the principle of coercion replaces voluntary choice. No profits exist to tell bureaucrats whether they have succeeded or erred. No prices direct the incremental flow of resources and no competition is allowed to provide an alternative to government provision of goods and services. Sure, voting does take place. But the notion that a one-time choice between a restricted field of two candidates can somehow take the place of millions of everyday choices made under vastly better marketplace circumstances is quaint, if not utterly ridiculous.

How Hollywood Has Come to Resemble Politics

More and more, Hollywood production has come to resemble political production. This evolution has accelerated during the last two decades.

Under the old studio system, motion-picture production often left the confines of Hollywood in favor of distant locations. This was sometimes motivated by concern for production values, as when director John Ford sought the scenic vistas of Monument Valley, Utah for his revival of the Western genre in the 1939 film Stagecoach. Increasingly, however, economics lay behind the decision of producers to abandon Hollywood in favor of locations in the eastern U.S., Canada, Mexico, Spain or elsewhere in Europe. Hollywood production was hamstrung by inefficient work rules established by Hollywood craft unions under the sway of organized crime. It became far cheaper to incur heavy travel costs to foreign locations than to bear the costs of a Hollywood shoot.

That was back in the day when Hollywood still operated under the rules of economics that govern private markets. Today, every state of the Union has a state-level “department of economic development.” These Orwellian entities are distinguished by their lack of adherence to economic principles. In particular, they offer subsidies to private businesses for locating and operating within the state. In the case of motion pictures, this takes the form of subsidies to production companies that shoot movies in-state. The rationale for this activity is almost always a purported “multiplier benefit” to the location’s “economy.”

A subsidy is the opposite number of a tax. Both drive a wedge between the price paid by the buyer and that received by the seller; both are inefficient actions with adverse effects on production and consumption. Whereas a tax causes too little of the taxed good to be produced and consumed, a subsidy causes too much production and consumption of the good affected and too little production and consumption of other things. State agencies justify their actions by ignoring their bad results in favor of the supposed good effects.
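The wedge logic above can be made concrete with a toy linear supply-and-demand model. This is purely an illustrative sketch with hypothetical numbers, not anything drawn from the article; it simply shows how a per-unit subsidy separates the buyer’s price from the seller’s price and pushes output above the unsubsidized level:

```python
# Toy linear market: demand Qd = 100 - Pb, supply Qs = Ps,
# where Pb is the price buyers pay and Ps the price sellers receive.
# A per-unit subsidy s drives a wedge between them: Ps = Pb + s.
# All numbers are hypothetical, chosen only to illustrate the wedge.

def equilibrium(subsidy):
    # Market clears where Qd = Qs: 100 - Pb = Pb + s  =>  Pb = (100 - s) / 2
    buyer_price = (100 - subsidy) / 2
    seller_price = buyer_price + subsidy
    quantity = 100 - buyer_price
    return buyer_price, seller_price, quantity

pb0, ps0, q0 = equilibrium(0)   # no subsidy: prices coincide at 50, Q = 50
pb1, ps1, q1 = equilibrium(10)  # subsidy of 10: Pb falls to 45, Ps rises to 55, Q = 55

# The subsidy lowers the buyer's price, raises the seller's receipts,
# and expands output past the unsubsidized equilibrium -- "too much"
# production, the mirror image of a tax yielding too little.
print(q0, q1)  # 50.0 55.0
```

Running the same model with a tax (a negative subsidy) shrinks quantity below 50, which is the symmetry the paragraph describes.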

The most highly touted benefit of movie-location subsidies is “job creation.” Even under the studio system, it was standard operating procedure for casting directors to scour the rolls of local actors to play subordinate parts, rather than pay travel expenses and higher salaries of Hollywood actors. The principal cast, whose work comprised the guts of the movie, was chosen on the basis of star power and acting ability. This was basic economics at work. Today, however, the pretense that subsidies are necessary to ensure work for locals and keep local industry alive is another way in which Hollywood has abandoned economics for politics. Movie subsidies are directly analogous to protective tariffs (taxes) levied on foreign goods to make their prices higher than local prices, thus protecting the jobs of local workers.

There is no economic value in creating or protecting jobs because the end-in-view in all economic activity is consumption, not production. The idea is to provide the best combination of output quantity and quality. The implication behind job creation – rarely stated outright but unmistakable – is that our goal should be to maximize the quantity of human labor employed in producing output, rather than to produce the most and best output. This suggests that the profession of economics should hold up ancient Egypt as its model state. The production of pyramids using slave labor may be the best means ever devised for maximizing the number of human beings doing work and eliminating unemployment. (The slave-labor camps of the old Soviet Union’s Gulag Archipelago are legitimate contenders for the title, but lose out on the grounds that they produced comparatively little tangible output.)

Since the general idea is to make people as happy as possible, though, we can rule out “job creation” as our lodestar. The reason it is such a popular political goal despite its economic drawbacks is that it concentrates benefits heavily on a group of easily identifiable people who can readily recognize and gauge their gains. The beneficiaries of a job-creation policy are a good bet to vote for their benefactor.

Another prime example of Hollywood’s shift from economic to political priorities is the re-ordering of the bottom line. The mainstream media still behaves as though the success or failure of a movie depends on its box-office receipts. This was certainly true throughout the 20th century, during the birth and development of the motion picture. But it is no longer true today.

Most movies today are conceived or at least approved by the “talent” – stars, writers, directors and their agents. Studios are coordinating and marketing vehicles. The astronomical fees commanded by the talent, together with high labor and insurance costs, make it prohibitively expensive to produce most movies. The only way to turn a profit is by marketing ancillary products to young customers. Most movies lose money at the box office and are subsidized by ancillary revenues and (at the studio level) the occasional box-office blockbuster.

The shift in priorities away from the box office has allowed the talent to cater to their own tastes in choosing the subject matter of movies. Under the studio system, the preferences of the audience were worshipped by movie-studio moguls like Louis B. Mayer, Irving Thalberg, Harry Cohn and Darryl Zanuck. Many of the moguls were immigrants and Jews who had strong opinions and might have loved to indulge their own tastes. Instead, they ruthlessly pruned the esoteric and controversial output of their directors, writers and stars because their instincts sensed that public tastes would not embrace it. Now Hollywood’s implicit motto is “the public taste be damned” – an attitude it would condemn unhesitatingly were it struck by a private industry producing hula hoops, automobiles or soap. This allows the talent to freely indulge their political preferences on screen.

Hollywood’s bias has long been to the Left. The Obama administration is now busily engaged in centralizing as much production as possible under the aegis of government – executive, legislative, judicial and regulatory. The case of Solyndra is a representative example of the results. Large subsidies were given for the production of an alternative energy facility. Market demand was unfavorably disposed toward the company’s output and it lost money hand over fist. But ancillary considerations – in this case, the ostensible necessity for the gestation of alternative energy production – outweighed the losses in the disposition of funds.

Losses, subsidies and the substitution of personal priorities for those of consumers have long characterized political production. But now they describe Hollywood, too.

How Politics Has Come to Resemble Hollywood

In the early 1950s, veteran actor and movie star Robert Montgomery was asked to tutor President Dwight D. Eisenhower on the fundamentals of spoken communication to improve Eisenhower’s performance in televised speeches and news conferences. Much was made of this intrusion of Hollywood into the pristine, public-spirited world of politics. The election of Ronald Reagan as Governor of California and U.S. President led to his subsequent anointing as the “Great Communicator” – a title that was given a pejorative cast by his critics on the Left. While these episodes may have painted the Oval Office with a show-business veneer, they hardly tell a story of Faustian corruption.

Today, however, candidates are chosen on the basis of qualities associated with movie stars rather than statesmen. Would a candidate as homely as Lyndon Johnson or with the profile of William Howard Taft even bother to register for the Presidential primaries? Journalists have expressed a public longing to sleep with Bill Clinton and Barack Obama, even though political scientists have never ranked amatory skill among the vital attributes of a Chief Executive. The candidacy of Mitt Romney was widely felt to be fatally handicapped by his biography, as if a Presidency were a movie that needed suitable first and second acts to set the stage for a dramatic finish.

Movies are an emotional medium rather than an intellectual one. Their narrative form is highly stylized, based on that of the theater. Movie scripts can be divided into first, second and third acts. There is a (preferably heroic) protagonist, who wages a conflict with one or more villains during the course of the movie. The protagonist undergoes a transformative experience and emerges better for it. There is a climactic resolution of the conflict.

Today, politicians structure campaigns and issues in this manner. They cast themselves as the hero. They demonize their political opponents as villains. And, most importantly, they appeal to the emotions of voters rather than to their intellect.

The timing of announcements, and sometimes even the substance of policies, is determined by “optics” – the snap judgments and emotive reactions of the public. The weight of issues is measured by their standing in opinion polls rather than by their impact on the real incomes of citizens. Like motion pictures, politics has become a purely emotional business in which objective truth is completely overshadowed by subjective perception.

The Sequester: A Barack H. Obama Production

Now we are engaged in a great civil debate on the issue of government spending. It will ultimately determine whether our nation – or any nation so constituted – can long endure. The opening volley in that debate has been fired by President Obama himself. But it has not been launched in the rhetorical tradition of intellectual inquiry and contention. Instead, it has been presented as a production of political theater – a Barack H. Obama production. Its title is: “The Sequester.”

In 2011, the Obama administration and Congressional Republicans fought a symbolic struggle over the raising of the debt limit. In order to orchestrate a victory over Republicans, the President crafted the sequester. The word “sequester” means “to set apart, segregate, or hand over (as to a trustee).” That refers to funds in the budget that were removed from consideration for spending purposes. In return for agreement to raise the debt limit, the President met Republicans halfway by agreeing to spending reductions in the form of sequestration.

Now, in 2013, when the time to follow up on his promise has come, President Obama has rewritten the script. He has recast himself as the hero and Republicans as villains in a melodrama in which spending reductions threaten hardship and economic setback. The original terms of the sequester called for $1.2 trillion in spending reductions spread over 10 years, averaging out to around $120 billion per year in reductions.

The actual reduction for 2013 would be about $85 billion. But there is more to the story. First, the cuts come only from so-called discretionary spending; entitlement programs like Social Security and Medicaid are unaffected. Second, the $85 billion figure reflects a reduction in budgetary authority – the statutory authorization to spend. Actual reduction in government outlays is projected to be only half the $85 billion total, or about $42 billion. The difference is accounted for by “baseline budgeting,” the notorious government budgetary practice that automatically increases expenditures every year. When the budget is ruled by the implicit logic that government spending is always good and a growing country will always need more of it from year to year, it is easy to grasp why the federal government is swimming in a sea of debt.
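The dollar figures cited above are simple arithmetic, and can be checked directly. A brief sketch (the halving of budgetary authority into actual outlays is the projection reported above, not a general budgeting rule):

```python
# Check the sequester arithmetic cited in the text (figures in billions of dollars).
total_cuts = 1200   # $1.2 trillion in reductions spread over ten years
years = 10
per_year = total_cuts / years   # averages out to ~$120 billion per year

authority_cut_2013 = 85                    # 2013 reduction in budgetary authority
outlay_cut_2013 = authority_cut_2013 / 2   # projected cut in actual outlays: ~half

print(per_year, outlay_cut_2013)  # 120.0 42.5
```

The $42.5 billion result matches the “about $42 billion” in actual outlay reductions cited above; the gap between the $85 billion authority figure and the outlay figure is what baseline budgeting absorbs.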

The biggest chunk of sequestration (about half of the authorized total) is slated to come from military expenditures. The remainder is sprinkled more or less equally throughout the federal discretionary budget, with the proviso that it should be distributed to cause the most pain to the populace. Does that sound like a pejorative characterization? No, The Wall Street Journal cited a memo to precisely that effect. Perhaps the most telling index of the melodramatic nature of this Barack H. Obama production came from a White House memo announcing that free tours of the White House would be cancelled until further notice due to “staffing reductions” caused by the sequester. As various bloggers hastened to point out, the tours are conducted by volunteers.

The President’s exercise in political theater contained many other dramatic high points. A White House fact (!) sheet stated that federal programs like Meals On Wheels would serve 4 million fewer meals thanks to the sequester. The document also claimed that 70,000 youngsters “would be kicked off Head Start,” the subsidy program for pre-school education, thanks to the sequester – a claim backed up by Health and Human Services Secretary Kathleen Sebelius. White House Press Secretary Jay Carney expressed grave concern for federal-government janitors who would receive less overtime pay because of the sequester. Department of Education Secretary Arne Duncan made headlines by declaring that there are “literally now teachers who are getting pink slips,” a whopper so outrageous that he was forced to retract it within 24 hours. Not to be upstaged by his supporting cast, the President himself gravely warned that federal prosecutors “will have to let criminals go” if the sequester is allowed to proceed.

The public is accustomed to seeing movies tell lies in the service of dramatic effect. That is exactly what this Barack H. Obama production does. Like many popular movies, it has borrowed its storyline from other successes. For over three decades, state legislatures have faced laws – such as Missouri’s Hancock Amendment – limiting state-government spending. The standard legislative tactic of opponents is to concoct a fantasy wish-list of worst-case spending reductions designed to terrify voters into repealing the laws. In fact, the laws say nothing about specific spending cuts. They allow the legislators themselves the flexibility to choose which spending to cut. The legislators are supposed to cut the most wasteful, redundant spending and retain only vital programs – assuming there are any. Yet in practice, the legislators do just the opposite – they pick the most painful cuts in order to blackmail voters into spending ad infinitum.

That tactic, straight from the playbook of radical activist Saul Alinsky, is the plotline of “The Sequester.” It makes no sense. When air-traffic controllers went on strike in 1981, President Ronald Reagan protected consumers, who were otherwise helpless against the threat posed by a government monopoly. He fired the striking controllers and hired replacements. Are we confronted by angry restaurant owners who threaten to close up unless we spend more money dining out? Of course not; the restaurant industry is competitive. Strikers would simply lose business to competitors who would step up to serve consumers. But government monopoly employees can successfully hold taxpayers hostage unless the Executive branch fulfills its duty to protect the public. Instead, the Obama administration is siding with the blackmailers.

The Administration’s economic rationale for its actions is transparently absurd. Unofficial Administration economic advisor Paul Krugman warns darkly of 700,000 lost jobs and the CBO forecasts a loss of one-half point’s worth of economic growth – all due to a net reduction in discretionary spending of $42 billion. Yet the Administration absolutely demanded that the Bush tax cuts end on schedule, producing a much larger effect on “aggregate demand” by Keynesian economic lights. Krugman has consistently maintained that the 2009 stimulus of nearly $800 billion was not nearly large enough to produce marked effects, so how can he now bemoan this piddling spending reduction?

Movie plots are not supposed to make sense. They are structured for emotional impact only. Producers, directors and screenwriters are granted dramatic license to lie in order to manipulate our emotions. Their actors and actresses are expected to speak lines from a script in order to enact the drama.

This is what politics has become. It is political theater, dedicated to the proposition that government of itself, by itself and for itself, shall not perish from the Earth.