DRI-172 for week of 7-5-15: How and Why Did ObamaCare Become SCOTUSCare?

An Access Advertising EconBrief:

How and Why Did ObamaCare Become SCOTUSCare?

On June 25, 2015, the Supreme Court of the United States delivered its most consequential opinion in recent years in King v. Burwell. King was David King, one of several plaintiffs opposing Sylvia Burwell, Secretary of Health and Human Services. The case might more colloquially be called “ObamaCare II,” since it dealt with the second major attempt to overturn the Obama administration’s signature legislative achievement.

The Obama administration has been bragging about its success in attracting signups for the program. Not surprisingly, it fails to mention two facts that make this apparent victory Pyrrhic. First, most of the signups are people who lost their previous health insurance due to the law’s provisions, not people who lacked insurance to begin with. Second, a large chunk of enrollees are being subsidized by the federal government in the form of a tax credit covering much of the cost of the insurance.

The point at issue in King v. Burwell is the legality of this subsidy. The original legislation provides for health-care exchanges established by state governments, and proponents have been quick to cite these provisions to pooh-pooh the contention that the Patient Protection and Affordable Care Act (PPACA) ushered in a federally run, socialist system of health care. The specific language used by PPACA in Section 1401 is that the IRS can provide tax credits for insurance purchased on exchanges “established by the State.” That phrase appears 14 times in Section 1401 and each time it clearly refers to state governments, not the federal government. But in actual practice, states have found it excruciatingly difficult to establish these exchanges and many states have refused to do so. Thus, people in those states have turned to the federal-government website for health insurance and have nevertheless received a tax credit under the IRS’s interpretation of Section 1401. That interpretation has come to light in various lawsuits heard by lower courts, some of which have ruled for plaintiffs and against attempts by the IRS and the Obama administration to award the tax credits.

Without the tax credits, many people on both sides of the political spectrum agree, PPACA will crash and burn. Not enough healthy people will sign up for the insurance to subsidize those with pre-existing medical conditions for whom PPACA is the only source of external funding for medical treatment.

To a figurative roll of drums, the Supreme Court of the United States (SCOTUS) released its opinion on June 25, 2015. It upheld the legality of the IRS interpretation in a 6-3 decision, finding for the government and the Obama administration for the second time. And for the second time, the opinion for the majority was written by Chief Justice John Roberts.

Roberts’ Rules of Constitutional Disorder

Given that Justice Roberts had previously written the opinion upholding the constitutionality of the law, his vote here cannot be considered a complete shock. As before, the shock was in the reasoning he used to reach his conclusion. In the first case (National Federation of Independent Business v. Sebelius, 2012), Roberts interpreted a key provision of the law in a way that its supporters had categorically and angrily rejected, both during the legislative debate prior to enactment and subsequently. He referred to the “individual mandate” requiring uninsured citizens to purchase health insurance as a tax. This rescued it from the otherwise untenable status of a coercive consumer directive – something not allowed under the Constitution.

Now Justice Roberts addressed the meaning of the phrase “established by the State.” He did not agree with one interpretation previously offered by the government’s Solicitor General, that the phrase was an undefined term of art. And he declined to apply a precedent established by the Court in a previous case involving interpretation of law by administrative agencies, the Chevron case. That precedent held that where statutory language is ambiguous, a reasonable interpretation by the agency charged with administering the law governs. In this case, though, Roberts claimed that since “the IRS…has no expertise in crafting health-insurance policy of this sort,” Congress could not possibly have intended to grant the agency this kind of discretion.

Roberts concedes that “established by the State” does not naturally mean “established by the federal government.” But he says that the Supreme Court cannot interpret the law this way because doing so would cause the law to fail to achieve its intended purpose. So the Court must treat the wording as ambiguous and interpret it in such a way as to advance the goals intended by Congress and the administration. Hence his decision for the defendant and against the plaintiffs.

In other words, he rejected the ability of the IRS to interpret the meaning of the phrase “established by the State” because of that agency’s lack of health-care-policy expertise, but is sufficiently confident of his own expertise in that area to interpret its meaning himself; it is his assessment of the market consequences that drives his decision to uphold the tax credits.

Roberts’ opinion prompted one of the most scathing, incredulous dissents in the history of the Court, by Justice Antonin Scalia. “This case requires us to decide whether someone who buys insurance on an exchange established by the Secretary gets tax credits,” begins Scalia. “You would think the answer would be obvious – so obvious that there would hardly be a need for the Supreme Court to hear a case about it… Under all the usual rules of interpretation… the government should lose this case. But normal rules of interpretation seem always to yield to the overriding principle of the present Court – the Affordable Care Act must be saved.”

The reader can sense Scalia’s mounting indignation and disbelief. “The Court interprets [Section 1401] to award tax credits on both federal and state exchanges. It accepts that the most natural sense of the phrase ‘an exchange established by the State’ is an exchange established by a state. (Understatement, thy name is an opinion on the Affordable Care Act!) Yet the opinion continues, with no semblance of shame, that ‘it is also possible that the phrase refers to all exchanges.’ (Impossible possibility, thy name is an opinion on the Affordable Care Act!)”

“Perhaps sensing the dismal failure of its efforts to show that ‘established by the State’ means ‘established by the State and the federal government,’ the Court tries to palm off the pertinent statutory phrase as ‘inartful drafting.’ The Court, however, has no free-floating power to rescue Congress from its drafting errors.” In other words, Justice Roberts has rewritten the law to suit himself.

Scalia reinforces his conclusion: “…the Court forgets that ours is a government of laws and not of men. That means we are governed by the terms of our laws and not by the unenacted will of our lawmakers. If Congress enacted into law something different from what it intended, then it should amend the law to conform to its intent. In the meantime, the Court has no roving license …to disregard clear language on the view that … ‘Congress must have intended’ something broader.”

“Rather than rewriting the law under the pretense of interpreting it, the Court should have left it to Congress to decide what to do… [the] Court’s two cases on the law will be remembered through the years. And the cases will publish the discouraging truth that the Supreme Court favors some laws over others and is prepared to do whatever it takes to uphold and assist its favorites… We should start calling this law SCOTUSCare.”

Jonathan Adler of the much-respected and much-quoted law blog the Volokh Conspiracy put it this way: “The umpire has decided that it’s okay to pinch-hit to ensure that the right team wins.”

And indeed, what most stands out about Roberts’ opinion is its contravention of ordinary constitutional thought. It is not the product of a mind that began at square one and worked its way methodically to a logical conclusion. The reader senses a reversal of procedure; the Chief Justice started out with a desired conclusion and worked backwards to figure out how to justify reaching it. Justice Scalia says as much in his dissent. But Scalia does not tell us why Roberts is behaving in this manner.

If we are honest with ourselves, we must admit that we do not know why Roberts is saying what he is saying. Beyond question, it is arbitrary and indefensible. Certainly it is inconsistent with his past decisions. There are various reasons why a man might do this.

One obvious motivation might be that Roberts is being blackmailed by political supporters of the PPACA, within or outside of the Obama administration. Since blackmail is not only a crime but also a distasteful allegation to make, nobody will advance it without concrete supporting evidence – not only evidence against the blackmailer but also an indication of his or her ammunition. The opposite side of the blackmail coin is bribery. Once again, nobody will allege this publicly without concrete evidence, such as letters, tapes, e-mails, bank account or bank-transfer information. These possibilities deserve mention because they lie at the head of a short list of motives for betrayal of deeply held principles.

Since nobody has come forward with evidence of malfeasance – or is likely to – suppose we disregard that category of possibility. What else could explain Roberts’ actions? (Note the plural; this is the second time he has sustained PPACA at the cost of his own integrity.)

Lord Acton Revisited

To explain John Roberts’ actions, we must develop a model of political economy. That requires a short side trip into the realm of political philosophy.

Lord Acton’s famous maxim is: “Power tends to corrupt, and absolute power corrupts absolutely.” We are used to thinking of it in the context of a dictatorship or of an individual or institution temporarily or unjustly wielding power. But it is highly applicable within the context of today’s welfare-state democracies.

All of the Western industrialized nations have evolved into what F. A. Hayek called “absolute democracies.” They are democratic because popular vote determines the composition of representative governments. But they are absolute in scope and degree because the administrative agencies staffing those governments are answerable to no voter. And increasingly the executive, legislative and judicial branches of the governments wield powers that are virtually unlimited. In practical effect, voters vote on which party will wield nominal executive control over the agencies and dominate the legislature. Instead of a single dictator, voters elect a government body with revolving and rotating dictatorial powers.

As the power of government has grown, the power at stake in elections has grown commensurately. This explains the burgeoning amounts of money spent on elections. It also explains the growing rancor between opposing parties, since ordinary citizens perceive the loss of electoral dominance to be subjugation akin to living under a dictatorship. But instead of viewing this phenomenon from the perspective of John Q. Public, view it from within the brain of a policymaker or decisionmaker.

For example, suppose you are a completely fictional Chairman of a completely hypothetical Federal Reserve Board. We will call you “Bernanke.” During a long period of absurdly low interest rates, a huge speculative boom has produced unprecedented levels of real-estate investment by banks and near-banks. After stoutly insisting for years on the benign nature of this activity, you suddenly perceive the likelihood that this speculative boom will go bust and some indeterminate number of these financial institutions will become insolvent. What do you do? 

Actually, the question is really more “What do you say?” The actions of the Federal Reserve in regulating banks, including those threatened with or undergoing insolvency, are theoretically set down on paper, not conjured up extemporaneously by the Fed Chairman every time a crisis looms. These days, though, the duties of a Fed Chairman involve verbal reassurance and massage as much as policy implementation. Placing those duties in their proper light requires that our side trip be interrupted with a historical flashback.

Let us cast our minds back to 1929 and the onset of the Great Depression in the United States. At that time, virtually nobody foresaw the coming of the Depression – nobody in authority, that is. For many decades afterwards, the conventional narrative was that President Herbert Hoover adopted a laissez faire economic policy, stubbornly waiting for the economy to recover rather than quickly ramping up government spending in response to the collapse of the private sector. Hoover’s name became synonymous with government passivity in the face of adversity. Makeshift shanties and villages of the homeless and dispossessed became known as “Hoovervilles.”

It took many years to dispel this myth. The first truthteller was economist Murray Rothbard, whose 1963 book America’s Great Depression pointed out that Hoover had spent his entire term in a frenzy of activism. Far from remaining a pillar of fiscal rectitude, Hoover had presided over federal deficit spending so large that his successor, Democrat Franklin Delano Roosevelt, campaigned on a platform of balancing the federal-government budget. Hoover sternly warned corporate executives not to lower wages and officially adopted a stance in favor of inflation.

Professional economists ignored Rothbard’s book in droves, as did reviewers throughout the mass media. Apparently the fact that Hoover’s policies failed to achieve their intended effects persuaded everybody that he couldn’t have actually followed the policies he did – since his actual policies were the very policies recommended by mainstream economists to counteract the effects of recession and Depression and were largely indistinguishable in kind, if not in degree, from those followed later by Roosevelt.

The anathematization of Herbert Hoover drove the man himself to distraction. The former President lived another thirty years, to age ninety, stoutly maintaining his innocence of the crime of insensitivity to the misery of the poor and unemployed. Prior to his presidency, Hoover had built a reputation as one of the great humanitarians of the 20th century by deploying his engineering and organizational skills in the cause of disaster relief across the globe. The trashing of his reputation as President is one of history’s towering ironies. As it happened, his economic policies were disastrous, but not because he didn’t care about the people. His failure was ignorance of economics – the same sin committed by his critics.

Worse than the effects of his policies, though, was the effect his demonization has had on subsequent policymakers. We do not remember the name of the captain of the Californian, the ship that lay anchored within sight of the Titanic but failed to answer distress calls and go to the rescue. But the name of Hoover is still synonymous with inaction and defeat. In politics, the unforgivable sin became not to act in the face of any crisis, regardless of the consequences.

Today, unlike in Hoover’s day, the Chairman of the Federal Reserve Board is the quarterback of economic policy. This is so despite the Fed’s ambiguous status as a quasi-government body, owned by its member banks with a leader appointed by the President. Returning to our hypothetical, we ponder the dilemma faced by the Chairman, “Bernanke.”

Bernanke only directly controls monetary policy and bank regulation. But he receives information about every aspect of the U.S. economy in order to formulate Fed policy. The Fed also issues forecasts and recommendations for fiscal and regulatory policies. Even though the Federal Reserve is nominally independent of politics and of the Treasury Department of the federal government, the Fed’s policies affect and are affected by government policies.

It might be tempting to assume that Fed Chairmen know what is going to happen in the economic future. But there is no reason to believe that is true. All we need do is examine their past statements to disabuse ourselves of that notion. Perhaps the popping of the speculative bubble that Bernanke now anticipates will produce an economic recession. Perhaps it will even topple the U.S. banking system like a row of dominoes and produce another Great Depression, a la 1929. But we cannot assume that either. The fact that we had one (1) Great Depression is no guarantee that we will have another one. After all, we have had 36 other recessions that did not turn into Great Depressions. There is nothing like a general consensus on what caused the Depression of the 1930s. (The reader is invited to peruse the many volumes written by historians, economic and non-, on the subject.) About the only point of agreement among commentators is that a large number of things went wrong more or less simultaneously and all of them contributed in varying degrees to the magnitude of the Depression.

Of course, a good case might be made that it doesn’t matter whether Fed Chairman can foresee a coming Great Depression or not. Until recently, one of the few things that united contemporary commentators was their conviction that another Great Depression was impossible. The safeguards put in place in response to the first one had foreclosed that possibility. First, “automatic stabilizers” would cause government spending to rise in response to any downturn in private-sector spending, thereby heading off any cumulative downward movement in investment and consumption in response to failures in the banking sector. Second, the Federal Reserve could and would act quickly in response to bank failures to prevent the resulting reverse-multiplier effect on the money supply, thereby heading off that threat at the pass. Third, bank regulations were modified and tightened to prevent failures from occurring or restrict them to isolated cases.
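To make the “reverse-multiplier” mechanism concrete, here is a minimal textbook sketch; the reserve ratio and dollar figures are illustrative assumptions, not numbers drawn from this article. Under fractional-reserve banking with required reserve ratio rr, reserves R support deposits of at most D = R/rr, so a loss of reserves forces a multiple contraction of deposits:

\[ \Delta D = \frac{\Delta R}{rr} = \frac{-\$1\ \text{billion}}{0.10} = -\$10\ \text{billion}. \]

A bank failure that destroys $1 billion of reserves can thus force a tenfold contraction of deposits, which is why prompt replacement of lost reserves by the Fed is supposed to head off a cumulative monetary collapse. (Real-world multipliers are smaller and looser, since they also depend on currency holdings and banks’ excess reserves.)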

Yet despite everything written above, we can predict confidently that our fictional “Bernanke” would respond to a hypothetical crisis exactly as the real Ben Bernanke did respond to the crisis he faced and later described in the book he wrote about it. The actual and predicted responses are the same: Scare the daylights out of the public by predicting an imminent Depression of cataclysmic proportions and calling for massive government spending and regulation to counteract it. Of course, the real-life Bernanke claimed that he and Treasury Secretary Henry Paulson correctly foresaw the economic future and were heroically calling for preventive measures before it was too late. But the logic we have carefully developed suggests otherwise.

Nobody – not Federal Reserve Chairmen or Treasury Secretaries or California psychics – can foresee Great Depressions. Predicting a recession is only possible if the cyclical process underlying it is correctly understood, and there is no generally accepted theory of the business cycle. No, Bernanke and Paulson were not protecting America with their warning; they were protecting themselves. They didn’t know that a Great Depression was in the works – but they did know that they would be blamed for anything bad that did happen to the economy. Their only way of insuring against that outcome – of buying insurance against the loss of their jobs, their professional reputations and the possibility of historical “Hooverization” – was to scream for the biggest possible government action as soon as possible.

Ben Bernanke had been blasé about the effects of ultra-low interest rates; he had pooh-poohed the possibility that the housing boom was a bubble that would burst like a sonic boom with reverberations that would flatten the economy. Suddenly he was confronted with a possibility that threatened to make him look like a fool. Was he icy cool, detached, above all personal considerations? Thinking only about banking regulations, national-income multipliers and the money supply? Or was he thinking the same thought that would occur to any normal human being in his place: “Oh, my God, my name will go down in history as the Herbert Hoover of Fed chairmen”?

Since the reasoning he claims as his inspiration is so obviously bogus, it is logical to classify his motives as personal rather than professional. He was protecting himself, not saving the country. And that brings us to the case of Chief Justice John Roberts.

Chief Justice John Roberts: Selfless, Self-Interested or Self-Preservationist?

For centuries, economists have identified self-interest as the driving force behind human behavior. This has exasperated and even angered outside observers, who have mistaken self-interest for greed or money-obsession. It is neither. Rather, it merely recognizes that the structure of the human mind gives each of us a comparative advantage in the promotion of our own welfare above that of others. Because I know more about me than you do, I can make myself happier than you can; because you know more about you than I do, you can make yourself happier than I can. And by cooperating to share our knowledge with each other, we can make each other happier through trade than we could be if we acted in isolation – but that cooperation must preserve the principle of self-interest in order to operate efficiently.

Strangely, economists long assumed that the same people who function well under the guidance of self-interest throw that principle to the winds when they take up the mantle of government. Government officials and representatives, according to traditional economics textbooks, become selfless instead of self-interested when they take office. Selflessness demands that they put the public welfare ahead of any personal considerations. And just what is the “public welfare,” exactly? Textbooks avoided grappling with this murky question by hiding behind notions like a “social welfare function” or a “community indifference curve.” These are examples of what the late F. A. Hayek called “the pretense of knowledge.”

Beginning in the 1950s, the “public choice” school of economics and political science was founded by James Buchanan and Gordon Tullock. This school of thought treated people in government just like people outside of government. It assumed that politicians, government bureaucrats and agency employees were trying to maximize their utility and operating under the principle of self-interest. Because the incentives they faced were radically different from those faced by people in the private sector, outcomes within government differed radically from those outside of government – usually for the worse.

If we apply this reasoning to members of the Supreme Court, we are confronted by a special kind of self-interest exercised by people in a unique position of power and authority. Members of the Court have climbed their career ladder to the top; in law, there are no higher rungs. This has special economic significance.

When economists speak of “competition” among input-suppliers, they normally mean people competing with others doing the same job for promotion, raises and advancement. None of these is possible in this context. What about more elevated kinds of recognition? Well, there is certainly scope for that, but only for the best of the best. On the current Court, positive recognition goes to those who write notable opinions. Only Justice Scalia has the special talent necessary to stand out as a legal scholar for the ages. In this sense, Justice Scalia is “competing” with other judges in a self-interested way when he writes his decisions, but he is not competing with his fellow justices. He is competing with the great judges of history – John Marshall, Oliver Wendell Holmes, Louis Brandeis and Learned Hand – against whom his work is measured. Otherwise, a justice can stand out from the herd by providing the deciding or “swing” vote in close decisions. In other words, he can become politically popular or unpopular with groups that agree or disagree with his vote. Usually, that results in transitory notoriety.

But in historic cases, there is the possibility that it might lead to “Hooverization.”

The bigger government gets, the more power it wields. More government power leads to more disagreement about its role, which leads to more demand for arbitration by the Supreme Court. This puts the Court in the position of deciding the legality of enactments that claim to do great things for people while putting their freedoms and livelihoods in jeopardy. Any judge who casts a deciding vote against such a measure will go down in history as “the man who shot down” the Great Bailout/the Great Health Care/the Great Stimulus/the Great Reproductive Choice, ad infinitum.

Almost all Supreme Court justices have little to gain but a lot to lose from opposing a measure that promotes government power. They have little to gain because they cannot advance further or make more money, and they do not compete with Marshall, Holmes, Brandeis or Hand. They have a lot to lose because they fear being anathematized by history, snubbed by colleagues, picketed or assassinated in the present day, and seeing their children brutalized by classmates or the news media. True, they might get satisfaction from adhering to the Constitution and their personal conception of justice – if they are sheltered under the umbrella of another justice’s opinion or they can fly under the radar of media scrutiny in a relatively low-profile case.

Let us attach a name to the status occupied by most Supreme Court justices and to the spirit that animates them. It is neither self-interest nor selflessness in their purest forms; we shall call it self-preservation. They want to preserve the exalted status they enjoy and they are not willing to risk it; they are willing to obey the Constitution, observe the law and speak the truth but only if and when they can preserve their position by doing so. When they are threatened, their principles and convictions suddenly go out the window and they will say and do whatever it takes to preserve what they perceive as their “self.” That “self” is the collection of real income, perks, immunities and prestige that go with the status of Supreme Court Justice.

Chief Justice John Roberts is an example of the model of self-preservation. In both of the ObamaCare decisions, his opinions for the majority completely abandoned his previous conservative positions. They plumbed new depths of absurdity – legal absurdity in the first decision and semantic absurdity in the second one. Yet one day after the release of King v. Burwell, Justice Roberts dissented in the Obergefell case by chiding the majority for “converting personal preferences into constitutional law” and disregarding the clear meaning of language in the laws being considered. In other words, he condemned precisely those sins he had himself committed the previous day in his majority opinion in King v. Burwell.

For decades, conservatives have watched in amazement, scratching their heads and wracking their brains as ostensibly conservative justices appointed by Republican presidents unexpectedly betrayed their principles when the chips were down, in high-profile cases. The economic model developed here lays out a systematic explanation for those previously inexplicable defections. David Souter, Anthony Kennedy, John Paul Stevens and Sandra Day O’Connor were the precursors to John Roberts. These were not random cases. They were the systematic workings of the self-preservationist principle in action.

DRI-275 for week of 8-24-14: The Movie Law of Inverse Relevance

An Access Advertising EconBrief:

The Movie Law of Inverse Relevance

Beginning in the late 1940s and early 50s, Hollywood made increasing numbers of movies designed to push a polemical agenda or send a political message. Prior to that, the major Hollywood studios followed “Mayer’s Maxim.” Metro Goldwyn Mayer boss Louis B. Mayer is credited with the dictum: “When I want to send a message, I’ll call Western Union.” Mayer objected to “message movies” because he didn’t think they were good box office.

This space has taken a different tack, objecting to Hollywood message movies a posteriori, doubting not their entertainment value but rather their veracity. The problem is that Hollywood producers, directors and screenwriters cannot keep their thumbs off the scales. Since reality stubbornly refuses to accommodate itself to their warped vision, they film their “true stories” by lying about the facts in order to satisfy the audience and themselves simultaneously. The problem is so endemic that the only safe approach is for viewers to assume that filmmakers are lying until proven otherwise.

This tempts us to the conclusion that truth and movies are mutually exclusive. We’re congenitally suspicious of entertainment-oriented Hollywood films. For example, we know that action movies defy the laws of physics and suspense movies end happily whereas real-life suspense often does not. If movies that advertise “This is a true story” are almost certainly lying to us, where can we hope to find a semblance of reality?

The surprising answer is that some of the most entertaining movies from Hollywood’s Golden Age, movies made with no apparent thought for social relevance, occasionally offer stunningly accurate illustrations of history and economics. This forms the basis for an empirical dictum called the Movie Law of Inverse Relevance: The more entertaining the movie, the greater the likelihood of encountering truth within it; the more socially conscious the movie, the less likely it is to be true.

Boom Town: More Than Just Another Hollywood Potboiler

Oil has been the lifeblood of modern life for over a century. You’d never know it from depictions of the oil business on screen, which have tended to treat petroleum as a commodity freighted with tragedy and the oil business as populated by psychotics. Yet it was not ever thus.

The 1940 movie Boom Town was one of the biggest box-office successes in the year after Hollywood’s legendary year of 1939. It starred Clark Gable, the “King of Hollywood,” and Spencer Tracy, winner of consecutive Best Actor Academy Awards in 1937 and 1938. The female lead, Claudette Colbert, had teamed with Gable in 1934’s It Happened One Night, the first film ever to win Academy Awards in the five major categories – Best Picture, Best Actor, Best Actress, Best Director and Best Screenplay. This was their “reunion” film, long-awaited by movie audiences throughout America. As if this blockbuster combination of stars weren’t enough to assure the film’s success, they were joined by Hedy Lamarr, perhaps the most beautiful woman in the world, and Frank Morgan, a scene-stealing character actor and eventual Oscar nominee in both the Best Actor and Best Supporting Actor categories.

The movie’s formidable assemblage of talent was enough to lure people into the theaters and keep them in their seats. But the script, by Gable’s favorite screenwriter, John Lee Mahin – based on a story by another Gable favorite, James Edward Grant – told more than the usual Hollywood tall tale. It told a true story of the oil business and the men who made it work – and a government that tried to torpedo it.

The Plot

The time is 1912. The place is a dusty Texas town called Burkburnett, which some spring rains have turned into a mudhole. Two men are crossing the muddy street from opposite directions on a narrow, rickety bridge of planks built from two-by-fours. They meet in the middle. The tall one (Gable) addresses the other (Tracy) as “Shorty” and cordially invites him to stand aside, knowing this would entail a side trip into the mud. This meets with a stony refusal. The two trade insults and the impasse is about to escalate into fisticuffs – then gunfire splits the air when a man flees the nearby saloon with a deputy sheriff in hot pursuit. The two men abandon their dignity and leap head-first into the mud rather than risk meeting a stray bullet.

Thus is born a famous friendship between “Big John” McMasters and “Square John” Sand. The two share more than a first name. They are both wildcat oil prospectors, freshly arrived in town thanks to the discovery of oil that has turned a tiny Texas fly-speck into a legendary boom town. They have both staked out a likely looking stretch of ground outside of town. They pool their meager assets and find they lack sufficient funds to purchase drilling equipment and supplies. McMasters allows Sand to choose the precise spot to “spud in” (begin drilling) but promises to produce the necessary materiel. At his urging, the two stage a skit to deceive a local equipment dealer, Luther Aldrich (Frank Morgan), into supplying the necessary stuff in exchange for a small share in their well which, they assure him confidently, is a sure thing to succeed.

The well fails. Sand reluctantly admits that McMasters’ choice of drilling location would have been better. Now the pair must raise their roll again – after first fleeing town one jump ahead of that same sheriff’s deputy whose bullets they had earlier dodged, one Harmony Jones (character actor Chill Wills). The film skillfully uses montage to concisely depict the succession of odd jobs and travails that eventually takes them back to Burkburnett. They have enough money to pay for tools and equipment now, but not enough to pay off the debt for their previous dry hole.

Undaunted, the two bluff their way past Luther Aldrich a second time. They’d be crazy to try the same routine on him again, wouldn’t they? This time they’ve really got a sure thing, and they’ll increase his stake as an incentive to agree to an ownership share against what they owe. Luther is imprudent enough to agree, but not completely crazy; he dispatches Harmony as a security guard over their claim to make sure they don’t run out on him a second time. McMasters gives Sand the naming rights over their claim and Sand chooses “Beautiful Betsy” in honor of the girl he left behind back East.

As the drilling progresses, the restless McMasters leaves Sand on duty at the rig one Saturday night and goes into town to relieve the monotony. He bumps into a proper Eastern girl (Claudette Colbert) who has journeyed to Burkburnett to meet a friend. She and McMasters experience the classic Hollywood “love at first sight” evening. By morning, they are married. Sand returns to their boarding house to break the news that their gusher has come in and the time-honored plot device of unknown identity unfolds – Colbert is Betsy Bartlett, the woman Sand is expecting to marry, while Sand is Betsy’s best-friend-who-she-doesn’t-feel-that-way-about. McMasters, in true Gable fashion, steps forward and invites Sand to take a poke at him. But Sand quietly asks Betsy if McMasters is the man she really wants. Upon verifying the truth, he calmly leaves the scene, implicitly giving the two his blessing. “Honey,” McMasters concludes admiringly, “that is a man.”

The movie’s next few minutes set the scene for the rest of the film. The audience learns that McMasters’ love for Betsy is true but equaled by his love of the chase and conquest. Betsy’s real rival is not other women but oil; women only tempt McMasters when he is tied down and prevented from exercising his talent for serial exploration and exploitation of oil. And Sand remains faithful to Betsy, his romantic ardor now sublimated into friendship. The movie resolves into the kind of romantic triangle that only Hollywood could dream up. McMasters and Sand make and lose a succession of fortunes and their friendship is broken and mended repeatedly. The cause of these episodes is Betsy; Sand will not allow McMasters to abuse Betsy’s love.

When McMasters meets the illegally lovely Karen Vanmeer (Hedy Lamarr), the two are drawn to each other. Vanmeer is a skilled business analyst who wants to acquire McMasters in a hostile takeover from his wife. Sand won’t permit it. He proposes marriage to Vanmeer and offers her lavish financial terms including a draconian divorce settlement that would enrich her. Astonished, she mutters, “I see. Greater love hath no man than…”

Eventually, the long-delayed fisticuffs between McMasters and Sand explode. The movie culminates in a battle over control of the oil business.

The plot summary highlights the entertainment value of Boom Town. It says nothing about the movie’s contributions to our understanding of history and economics.

Boom Town as History

There is no narrative or visual prelude assuring us that “this is a true story.” Nevertheless, there is no movie that tells the story of wildcat oil exploration and drilling in the early 20th century as vividly and truthfully as Boom Town. Burkburnett was a real Texas boom town where oil was discovered in 1912. The discovery turned the town upside down in just the manner portrayed in the movie.

How many movies made today are as relevant to contemporary life? The Burkburnett of 1912 is uncannily like parts of Texas and North Dakota today – scruffy, muddy, starved of infrastructure, crowded with roughnecks, troubled with petty crime but bursting at the seams with opportunity and unbridled vitality. Both today and a century ago, this was a frontier region – not in the geographic sense but in an economic sense. This was entrepreneurship at its most raw and visceral, not something out of business school.

Perhaps the most neglected feature of Boom Town is the role played by this scenic backdrop. The movie is so dominated by its multiple stars and impeccable supporting cast that the audience is unconscious of the background. We feel it acutely nonetheless. The critic James Shelley Hamilton wrote long ago of the elements that make up “the feet a movie walks on.” Boom Town owes its jaunty strut to its brilliantly observed picture of the life of an oil town, whether in Texas, Oklahoma, Pennsylvania, California or Central America.

Boom Town as Economic Theory and Logic

Boom Town should be shown in university courses on economic history and theory. We could leaf diligently through reference sources like Halliwell’s Film and Video Guide or Leonard Maltin’s Movie Guide without encountering another movie so rich in economic meaning.

The physical, geologic circumstances of petroleum evolution and extraction create an age-old problem of economic investment and consumption. In the movie’s final third, McMasters discovers that the refining of oil offers even more scope for entrepreneurial skill and profit than does exploration and production. Characteristically, he charges into the market full-bore, determined to risk going down in flames in order to become a leader. He forms a partnership with wily veteran Harry Compton (character actor Lionel Atwill). But when Sand and McMasters feud over the latter’s treatment of Betsy, Sand enlists Compton in an effort to break McMasters by double-crossing him. In retaliation, McMasters calls on his countless contacts among the country’s small wildcatters, persuading them to forsake the partnership of Compton and Sand and sell their oil to him instead.

McMasters uses an argument that must have seemed obscure to most movie audiences – and probably still does. But knowledgeable industry observers and economists will recognize within it a time-honored conundrum. “Sand will make you force-pump your wells,” he insists to the wildcatters. “Pretty soon you’ll be looking at dry holes. Go with me and I’ll keep you pumping years longer.” Hollywood was – and still is – famous for dishing out all manner of baloney in the service of its plots. But this wasn’t the usual nonsense.

According to orthodox geological theory, petroleum is created from fossilized organic deposits subjected to heat and pressure in the ground over millions of years. The resulting liquids migrate and congregate in underground reservoirs called “traps.” That term is particularly apt when the liquid is literally trapped within rocks like the shale or sandstone that now supplies much of the oil being produced in the northern United States and Canada. Oil exploration has traditionally consisted of the location, identification and confirmation of these traps.

But just locating oil isn’t enough; that’s just the beginning of the process. Getting the oil out of the ground was no picnic in the early 20th century. Drilling holes in the ground using percussive methods – punching holes with heavy machinery – enables the oil to be reached and exhumed. Raising it to the surface isn’t like dropping a dipper in a pail of water and lifting it to your lips. It takes great physical persuasion to accomplish. McMasters’ use of the term “force-pumping” referred to the practice of pumping compressed air down the drilling shaft to force the oil to the surface. This practice involved a certain amount of time, trouble and danger. But the worst thing about it was the tradeoff it implied. Its use eventually made the trap unproductive – not because the oil was fully extracted but because the remaining oil could no longer be withdrawn from the ground. Given the technology then in use, it was stuck there. We know it was there, or at least those in the know did. But it didn’t count as “reserves,” because “proven reserves” only consisted of oil that was actually extractable. Depending on particular circumstances, this might be anywhere from 30% to 60% of the original petroleum deposit in the trap.
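To put rough numbers on the reserves point (the figures below are illustrative assumptions, not data from the film or from industry records):

\[ \text{proven reserves} = \text{recovery factor} \times \text{oil in place}, \]

so a trap holding 10 million barrels with a 40% recovery factor contributes only 4 million barrels to “proven reserves.” The other 6 million barrels exist physically but not economically – until prices or extraction technology change, at which point reserves grow without a single new discovery.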

These facts of geologic and economic life are particularly germane today. The U.S. economy today is getting a shot in the arm from oil exploration and production in Texas and North Dakota, not to mention the oil coming from our longtime leading supplier to the north, Canada. Strictly speaking, this oil comes not from “new” discoveries but from long-existing fields and rigs that only recently became economically useful. New techniques of “enhanced recovery” like horizontal drilling (over fifty years old but newly profitable) and “fracking” have given these sources a new lease on life – which aptly describes the effect the oil has had on the American economy.

The wildcatters McMasters and Sand fought over faced a classic economic dilemma. They could pump more oil now and a lot less later or pump somewhat less now and somewhat more in the future. Sand himself alludes to this in courtroom testimony by calling McMasters a “conservationist… although he didn’t know it.” We are taught – conditioned is a better word – to view “conservation” as a good thing, as the antonym of “waste.” That is simply not true, though. There is no inherent, technological logic of efficiency that allows us to prefer consumption in the future to consumption now; only human preferences and purposes can resolve this issue.

That is where the interest rate enters the picture. Interest rates balance the supply of saving funds and the demand for investment funds – that is, the desires of those who want to consume more in the future and those who want to produce things to be consumed in the future. In pure theory, there is an optimal rate of extraction for natural resources such as petroleum that depends on the level of interest rates. Relatively low interest rates suggest that people want to consume lots in the future and that we should economize on consumption now and concentrate on production for the future. High interest rates encourage current consumption and discourage saving and investment geared toward the future.
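The “optimal rate of extraction” result mentioned above is the classic Hotelling logic. A minimal sketch, under standard simplifying assumptions (the article itself does not derive it): the owner of an oil trap is indifferent between pumping a barrel today and leaving it in the ground only if the net price is expected to appreciate at the rate of interest,

\[ \frac{p_{t+1} - p_t}{p_t} = r \quad\Longrightarrow\quad p_t = p_0 (1+r)^t. \]

If prices are expected to rise more slowly than r, it pays to pump now and put the proceeds in the bank; if faster, it pays to leave the oil in the trap. High interest rates thus tilt extraction toward the present and low interest rates toward the future – exactly the balancing act described above.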

The movie presents conservation in a whole new light – as governed by economics. Boom Town doesn’t present this relatively sophisticated analysis explicitly; it just treats McMasters as a hero for promoting “conservation.” The implications of this, however, are unprecedented.

For one thing, Sand suggests that McMasters is acting entirely in pursuit of his own profit, yet his actions promote the general interest. That is, he is providing an operational definition of Adam Smith’s famous invisible hand at work. Celebrations of Adam Smith in Hollywood movies occur roughly as often as Halley’s Comet visits our solar system. For another, conservation in the movies is practiced by environmentalists or mavericks or nut jobs who are portrayed as really smarter than successful people – but never by successful businessmen. In 1940 as today, businessmen weren’t allowed to act nobly or altruistically within the framework of a movie unless they were portrayed as deliberately scorning profit.

Compton matter-of-factly uses the antitrust laws as a tool to harm his competitor, McMasters, thus serving his own business advantage. When Compton (Atwill) muses, “I wonder what the federal government would say about McMasters’ activities…,” and we then witness McMasters’ trial for violating the provisions of the Sherman Antitrust Act, it is a seminal movie moment. It would be over twenty years before radical historian Gabriel Kolko would advance his famous theory of “regulatory capture,” which was eventually co-opted by the right wing as a key plank in its opposition to the regulatory state. Kolko’s research showed that the first great regulatory initiative, the Interstate Commerce Commission (ICC) in 1887, was ushered in by the corporate railroad interests it ostensibly was created to regulate. The railroad business was beset by the age-old bugbear of industries with high fixed costs and low variable costs: price wars among competitors. The ICC cartelized the industry by raising prices and ending the price wars. Subsequent research has shown that antitrust enforcement has specialized in suppressing competition by concentrating on protecting competitors from competitive damage rather than safeguarding the competitive process itself.

McMasters successfully persuades wildcatters to forsake Compton and Sand in his favor. Yet his actions are criminalized as “monopolization.” It is true that orthodox economic theory describes a monopolist as one who “restricts” output in his own interest. But his ability to do that derives from restrictions on entry into the industry. The oil business is legendary for the absence of just those restrictions; indeed, that is what Boom Town is all about. Even the smallest wildcatter, whose fraction of total oil output is so tiny as to foreclose any influence on the market price of oil, still faces a problem of optimizing the time structure of oil extraction and sale. This problem is absent from orthodox theory only because that theory is timeless; it foolishly treats production and consumption as though occurring simultaneously in a single timeless instant.

In the event, the movie and the jury both vindicate McMasters by finding him innocent of monopolization. Unfortunately, he has spent so much money in his legal defense that he is now broke again, for what seems the umpteenth time. And this is yet another sophisticated economics lesson: somebody can be right and win in court, yet still be defeated by the magnitude of legal expenses.

Entertainment Wins Out in the End – as Usual

We are seemingly set up for a downbeat ending. But not in 1940, not when Clark Gable, Spencer Tracy and Claudette Colbert are heading the cast. At the fadeout, we find ourselves on a California hillside, overlooking a valley. McMasters, Betsy and Harmony are broke but happy, living out of a trailer and working the one small section of oil property that McMasters has left after his devastating brush with antitrust law. Who should come wandering over the hill but Luther Aldrich and John Sand? Aldrich has persuaded Sand to invest in the property as a devious scheme to reunite the old partners. Grudging at first, they spar over where the oil structure is located and where the rig will spud in. They turn their aggressive humor on their old target; Aldrich will naturally float them the tools and equipment in exchange for an ownership share in the property, in lieu of cash payment. “Oh, no!” Aldrich exclaims. “You two go broke on your own this time. There’s a dry hole in every foot of this place.”

As the background music score swells, the four principals stroll arm in arm toward the camera, grinning happily. “What’s the name of this sucker’s paradise?” demands Aldrich. “They named it after some old guy called Kettelman,” McMasters explains nonchalantly. “They call it ‘Kettleman Hills.’”

“Kettleman Hills?” Aldrich scoffs. “Doesn’t even sound like oil.”

The 1940 movie audience knew what today’s audience, for whom American history is a lost pastime, never learned. The gigantic Kettleman Hills discovery was one of the greatest oil booms of its day. McMasters, Sand, Aldrich and Betsy will soon be richer than ever. It’s happy-ending time for the cast of Boom Town.

The Moral

Metro Goldwyn Mayer never set out to make Boom Town a “relevant” movie, slake an executive’s social conscience or satisfy a star’s altruistic longings. If anybody associated with the project sensed its historical or economic uniqueness, it was a well-guarded secret. Its singular goal was entertainment, one that it fulfilled admirably.

The bleached bones of failed socially conscious and message movies litter the pages of Variety and other trade publications. The lies told by the numerous “true stories” and exposés await exposure by an investigator with the intestinal and anatomical fortitude for the job. Buried within the boundless entertainment of gems like Boom Town are the real lessons Hollywood can teach us about economic history and theory, freedom and free enterprise.

The relationship between socially relevant pretension and truth in movies is inverse. The more relevance, the less value; the less relevance and the more entertainment, the more truth.

DRI-322 for week of 4-6-14: How the Dead Hand of Regulation Is Holding Back the Future

An Access Advertising EconBrief:

How the Dead Hand of Regulation Is Holding Back the Future

Self-Driving Cars: What’s the Hurry? 30,000 Annual Deaths Are Nothing to Get Excited About

Self-driving cars are automobiles that drive themselves. This is possible because they possess a system of sensors and computer programs that perform the basic driving functions of starting, shifting, steering, navigation, “seeing” obstacles and avoiding them, “observing” (coded) traffic and geographic signage and stopping. Most people are aware that Google has built a fleet of self-driving cars. Many people know that self-driving cars have been extensively tested, not only on private courses but also on public roads in states such as California. Some people know that self-driving cars have had no accidents during these tests.
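The architecture just described boils down to a repeating “sense-plan-act” loop. The sketch below is purely illustrative – every name in it is a hypothetical placeholder, not Google’s or any vendor’s actual software:

# A minimal "sense-plan-act" control loop of the kind described above.
# All names here are hypothetical placeholders, not any real vendor's API.

from dataclasses import dataclass

@dataclass
class Scene:
    obstacle_ahead: bool   # fused from cameras/lidar/radar in a real car
    speed_limit: float     # read from coded signage or map data, in mph

def sense() -> Scene:
    # Stub: a real system fuses live sensor data here.
    return Scene(obstacle_ahead=False, speed_limit=30.0)

def plan(scene: Scene, current_speed: float) -> float:
    # Decide a target speed: stop for obstacles, otherwise obey signage.
    if scene.obstacle_ahead:
        return 0.0
    return min(scene.speed_limit, current_speed + 5.0)  # accelerate gently

def act(target_speed: float) -> None:
    # Stub: a real system commands throttle, brakes, steering, gearbox.
    print(f"commanding speed: {target_speed:.0f} mph")

speed = 0.0
for _ in range(3):  # three iterations of the loop
    scene = sense()
    speed = plan(scene, speed)
    act(speed)

The point of the loop structure is that every safety-critical judgment a human driver makes is reduced to software that never tires, drinks or texts.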

It would seem that these facts have enormous significance. Currently, deaths due to motor-vehicle accidents constitute the leading cause of accidental death in the United States. In 2012, the most recent year for which complete data are available, over 34,000 people died on U.S. roads from motor-vehicle accidents. (This was an increase from the 2011 total of 32,000+, for which the figure of 1.10 deaths per 100 million vehicle miles traveled was an all-time low since this safety statistic was first measured in 1921.) This does not count an additional 2,000+ pedestrians and motorcyclists who also died due to accidents in which motor vehicles were implicated.
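A quick arithmetic check, using only the numbers quoted above, shows the two figures are mutually consistent:

\[ \frac{32{,}000\ \text{deaths}}{1.10\ \text{deaths per}\ 10^8\ \text{miles}} \approx 2.9 \times 10^{12}\ \text{vehicle-miles}, \]

i.e., the quoted rate implies that Americans drove roughly three trillion vehicle-miles in 2011, which is the right order of magnitude.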

Combine the information in the first paragraph of this section with the information in the second paragraph. This amalgamation is tantamount to saying that a disease epidemic currently kills over 30,000 people yearly, and we have a nearly foolproof cure for the disease. And we are doing virtually nothing to implement that cure.

In this case, the “cure” entails making the necessary changes in infrastructure and law to allow self-driving vehicles (SDVs) to operate in the U.S. Whether SDVs are or are not ready for mass adoption tomorrow or the next day is irrelevant – at the moment, we couldn’t adopt them even if they were ready for prime time. What we should be doing is paving their way (no pun intended) so that when all their bugs have been exterminated, we can put SDVs into use post haste.

The federal government has assumed the role of safety czar for the nation. Superficially, one would expect federal agencies to be making rules, suggesting law changes and beating the drums for the dawning new era in American transportation in the same manner as (say) they have been propagandizing for ObamaCare.

Instead, this is how the federal government has reacted to the prospect of self-driving cars:

“The National Highway Traffic Safety Administration (NHTSA) does not recommend that states authorize the operation of self-driving vehicles for purposes other than testing at this time.” The NHTSA, as its name implies, is the agency within the U.S. Department of Transportation whose specific mandate is traffic safety. Yet, incredible as it seems, NHTSA is not only failing to proceed full speed ahead with plans for the future of self-driving cars – it recommends that states not authorize their general use, even as some states are doing just that.

Uh…what does NHTSA recommend that states do about self-driving cars, then? “NHTSA recommends that states require separate driver licenses, or at least special driver-license endorsements, for those who wish to operate autonomous vehicles.” A licensure requirement is a classic example of what economists call a “barrier to entry” into an activity. In other words, the NHTSA is trying to make it harder for people to drive SDVs.

The Secretary of the Department of Transportation, Ray LaHood, had this to say about his department’s policy on SDVs: “…Our top priority is to ensure these vehicles and their occupants are safe.” Picture this hypothetical scenario: We are suffering an epidemic in which tens of thousands of people die every year. Some people step forward and volunteer to test a vaccine that will almost certainly cure the disease that causes the epidemic. Suddenly a Federal Cabinet head steps forward with hand upraised to place controls on the testing process. “Our top priority is to make sure this vaccine and these test volunteers are safe.” No, dummy! Your top priority is to keep the nation safe! Thousands of people are dying every year! If a few people have to endure a slight risk to eliminate those deaths, your job is to get out of the way and let that happen as quickly as possible!

That hypothetical scenario is an excellent analogy to the status of SDVs today.

At this point, the reader must be shaking his head in utter disbelief. Are these people crazy? Are they completely unaware of the progress of SDVs? Oh, no. NHTSA goes on to blandly admit that “self-driving cars are seen as having the potential to save many thousands of lives annually by avoiding deadly crashes caused by human error, the reason for the vast majority of auto accidents.” This means that NHTSA is placing roadblocks in the path of SDVs while knowing full well their lifesaving potential. In other words, the NHTSA will save those thousands of lives when it is good and ready or, as the late Orson Welles might put it, the NHTSA will save no lives before its time.

NHTSA to Americans: Drop dead.

Just in case the full picture isn’t clear by now, the NHTSA currently has power to affect the lives of virtually every American and control the activities of all drivers. SDVs have the potential to leave the NHTSA with nobody to regulate, since there would be virtually no safety issues left other than purely mechanical ones that would gradually fall almost to zero. (But you can be sure that the agency will fight tooth and claw to retain safety regulation of SDVs anyway, to justify its existence in downsized form.)

NHTSA wants to set up its own tests for SDVs. In fact, these “tests” will be used to delay the progress of SDVs as long as possible. As precedent for this prediction, we can cite the long-drawn-out deregulation and subsequent technological revolution in telecommunications, much delayed in America compared to many other countries.


“Someday All Planes Will Be Drones”

Ask the average American what he or she knows about “drones” and chances are the reply will focus on pilotless aircraft controlled by a military operator on the ground and used in Middle Eastern countries to assassinate terrorists. In a way, this is fitting, since drones began as military weapons over half a century ago.

The word “drone” connotes a mindless worker performing rote tasks, in the manner of worker bees. When mechanical, drones are under the control of a human operator. The earliest drones were developed for military-intelligence purposes in the late 1950s. When Francis “Gary” Powers was shot down while piloting a U-2 high-altitude spy plane in 1960, the U.S. military began substituting pilotless craft for U-2s to avoid incurring propaganda setbacks from the capture of live prisoners.

Drone technology was perfected in successive wars from Vietnam to Kuwait to Afghanistan to Iraq. Today drones are anything but an embryonic innovation, full of kinks and bugs. It is long past time for their debut in commerce. Amazon’s Jeff Bezos recently attracted attention with a plan to deliver packages via drones. Speculation about the fate of Malaysian Airlines Flight 370 has raised the possibility of pilotless commercial airliners. After all, the most common cause of airline crashes is pilot error, just as driver error is the most common cause of auto accidents. The general public is barely aware that most basic functions of commercial airline flight have already been automated.

Whenever scientific innovation threatens to make the world a better place, government regulators can be relied upon to build barriers to progress. This sounds highly pejorative to most people, yet it is really a perfectly logical state of affairs. Entrepreneurs and business owners use scientific innovations to make our lives better, not necessarily because they long to help us but because the only way they can profit is if we approve of their business decisions. Government regulators cannot earn profits and they do not benefit personally from making our lives better by (say) improving workplace safety or blocking a dangerous drug or process from coming to market. Consequently, they strive to improve their own welfare by maximizing government budgets and payrolls and minimizing risk of public-relations disaster. They do that by strangling innovation and risk-taking by the private sector.

Aren’t government regulators members of the society they regulate? If their decisions redound to our disadvantage, don’t they lose by that, just as we do? Yes, but for any one regulatory decision there is only a small chance that the regulator’s own consumption will be reduced markedly by restrictive regulation. But a too-permissive regulation that turns out wrong – a drug allowed on the market that later causes illness or death, for example – will kill the regulator’s career. And in the case of innovative technologies like self-driving cars, laissez-faire regulation will kill the entire regulatory agency or vastly reduce its scope. The political left has made its bones by insisting that corporations cheerfully kill their customers in pursuit of profits. In reality, it is obvious that government regulators are the ones who will send tens of thousands of Americans to their deaths annually rather than face the prospect of losing their regulated captives when self-driving cars replace human-driven ones.

Any doubts about the cogency of this analysis should be erased by consideration of federal regulatory policy regarding commercial drone use. While American businesses are lining up to use drones for various applications, this is the policy of the Federal Aviation Administration (FAA) on the commercial use of Unmanned Aircraft Systems (UAS):

“The first annual UAS Roadmap addresses future policies, regulations, technologies and procedures that will be required as UAS operations increase in the nation’s airspace.” “Policies, regulations, technologies and procedures” that will be “required?” This sounds as though the FAA plans to micromanage UAS by creating a thicket of bureaucratic rules that will slow the industry to a crawl. And sure enough: “The Joint Planning and Development Office (JPDO) have developed a comprehensive plan to safely accelerate the integration of civil UAS into the national airspace system.” To a professional economist, the words “comprehensive plan” specifically mean complete control by a central authority in the manner of the former Soviet Union’s GOSPLAN. The phrase “safely accelerate” has an Orwellian ring; it means that the government is going to slow down UAS development while pretending to move it forward with all deliberate speed.

We are now observing an excellent example of the FAA’s “safe acceleration” in action. The agency announced earlier this year that it would hold a public meeting on May 28, 2014 to “discuss the agency’s plans to establish a new unmanned aircraft system (UAS) center of excellence (COE).” The FAA considered 24 U.S. cities as candidates for sites to test the safety of UAS. It “considered geography, climate, location of ground infrastructure, research needs, airspace use, safety, aviation experience and risk” in selecting 6 test sites.

Wait a minute – if the military has been using drones for over a half-century, why do we need a civilian agency (presumably lacking the military’s expertise) to test the safety of the technology? Drones have already been operating in civilian airspace in the course of performing their military duties, both inside the U.S. (in transit) and outside it (accomplishing their mission). The testing sites and center of excellence are a classic regulatory stall. (A bureaucratic rule of thumb: the more seriously a bureaucracy takes itself by employing elevated, obfuscatory rhetoric and lengthy acronyms, the less valid its mission.)

Superficially, the stakes may seem lower than with SDVs. There are no 30,000 lives to be saved immediately by the commercialization of drone technology. But that impression is deceptive. If it is possible to deliver Amazon’s goods, it is possible to deliver vital foods, fuels and medicines, too. Public attention has focused on the possible abuse of privacy by drones, but why not focus on the potential to enhance privacy and security by using drones? History supplies plenty of cases in which the ultimate uses of technology differed dramatically from their initial ones.

In an incisive Wall Street Journal column, Holman Jenkins pointed out that most routine functions of commercial aircraft have been automated already. “Someday all planes will be drones,” Jenkins said. Predictably, his words elicited indignant denials from airline pilots whose jobs were threatened by the prospect of drones. But Jenkins is right. Airliner accidents are still due predominantly to human error – the very thing that drones will eliminate.

The War on High-Frequency and High-Speed Stock Trades

Technology is also in the forefront of the latest regulatory jihad against the financial community. This particular war is waged against traders of financial assets, particularly stocks. The erring traders are not buying or selling the wrong stocks; they are trading in the wrong way. At this point, things become confusing. At times, the traders are trading too often; that is, they are engaging in high-frequency trading. At other times, the traders are trading too fast, engaging in high-speed trading.

Do the two sound like the same thing? Well, if you think so, you’re in bad company, because regulators apparently think so, too. A little thought will show that this need not be so. Even in the old days of trades penciled on slips of paper and consummated via open outcry, it was possible to trade many times per day, although it was rather uncommon. High-speed trades, of course, were ushered in with computer technology and became dominant during the Internet era. But regulators have recently issued overwrought bulletins suggesting that they view the two as equivalent, and equally shady, practices.

“The FBI has developed fact patterns of potentially illegal trading,” announced one of these bulletins. This sounds ominous, to say the least. But “because high-speed trades are executed by computer programs, it is often more difficult to detect nefarious activity and to prove that it was executed intentionally.” This astounding qualifier is enough to send a knowledgeable analyst’s eyebrows flying off his forehead. Potentially illegal trading? What on earth could merit this designation? Why does cybernetic origin cloud the issue of intention? After all, somebody had to program the computer. And why on earth does computer trading make it hard to detect wrongdoing? Was wrongdoing unheard of back in the primitive, pre-computer days? This sounds as if regulators want to demonize high-speed trading but lack evidence of any real wrongdoing – so they have to content themselves with hinting darkly that something funny must be going on.

This impression was reinforced by subsequent comments by an FBI spokesman, who cited “the practice of placing a group of trades… to create the false appearance of market activity.” But this kind of “churning” and allegations of it have been going on for a few centuries, since the days when trades were conducted outdoors under shade trees. The FBI purports to investigate “whether high-speed trading firms are engaging in insider trading by taking advantage of fast-moving market information unavailable to other investors.” Surely the FBI must be kidding. Since time immemorial, the slogan on Wall Street has been “buy on the rumor, sell on the news.” The idea has been precisely to move fast to take advantage of market information before it becomes generally known. If that constitutes insider trading ipso facto, then Wall Street might as well close up shop and go home.

But the FBI is really serious. “There are many people in government who are very focused on this and who are very concerned about it and who think it breaks the law.” The only thing missing seems to be a Ten Most Wanted Financial Traders List.

Grizzled veterans of financial markets feel an overwhelming sense of déjà vu at all this. They remember Richard Ney, for example. Ney was the young actor who starred alongside Greer Garson as her son in the 1942 Oscar-winning film Mrs. Miniver. The next year, Garson and Ney startled the film world by marrying. Ney’s film career fizzled out despite solid work in a few more films. After working in television, Ney became a Wall Street stockbroker and wrote bestselling exposés explaining why the stock market was rigged against the small, non-professional investor and in favor of proprietary trading firms.

Ney’s complaints were focused on the activities of specialists, people hired by the exchanges to ensure that a market always existed for any listed stock. The specialist was required to take the other side of any trade for which either buyers or sellers were not soon forthcoming. As compensation for the potential financial inconvenience of playing this role, the specialist was accorded the benefit of a bid-ask spread; i.e., a kind of brokerage fee embodied in the differential between buying price and selling price. The size of this spread serves as a direct index of risk in the trade of the asset.
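To see the specialist’s economics in miniature, consider the following stylized sketch. The quotes and share counts below are purely hypothetical, invented for illustration; they are not drawn from Ney’s books or from any actual market:

# A stylized sketch of how a bid-ask spread compensates a liquidity
# provider such as a specialist. All numbers are hypothetical.

def round_trip_profit(bid: float, ask: float, shares: int) -> float:
    """Gross profit to a dealer who buys at the bid and sells at the ask."""
    return (ask - bid) * shares

# A heavily traded stock can support a penny-wide market...
liquid_stock = round_trip_profit(bid=99.99, ask=100.00, shares=1_000)

# ...while a thin, volatile stock needs a wider spread to compensate the
# dealer for the risk of being stuck holding inventory nobody wants.
thin_stock = round_trip_profit(bid=9.75, ask=10.25, shares=1_000)

print(f"Liquid stock: ${liquid_stock:,.2f} per 1,000-share round trip")  # $10.00
print(f"Thin stock:   ${thin_stock:,.2f} per 1,000-share round trip")    # $500.00

The wider the spread, the more the dealer is paid per round trip to bear the risk of holding unwanted inventory; that is why the size of the spread indexes the riskiness of trading the asset.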

It is ironic that the advent of computer trading has consigned the specialist, if not quite to the fate of the dodo, at least to relative insignificance. How? Well, the whole purpose of specialists was to guarantee a liquid market, but computer trading has made practically everybody a potential trader. Not only that, but the presence of John Q. Public, Joe Doakes and Joe Sixpack in the stock market has meant that more trades are being done at a lower average size per trade. And that happens to be the hallmark of high-frequency trading, as pointed out in a recent Wall Street Journal op-ed (“High-Frequency Hyperbole,” WSJ, 4/2/2014) by two veteran money managers, Clifford Asness and Michael Mendelson, who are not themselves high-frequency or high-speed traders.

So Ney’s bogeyman, the foe of the small investor, has now been put in his place by high-frequency computer stock trading. This doesn’t exactly sound like high-frequency trading is a threat to the public weal. Asness and Mendelson’s opinion of high-frequency trading is that “we think it helps us. It seems to have reduced our costs [by reducing bid-ask spreads] and …enable[s] us to manage more investment dollars.” In effect, say the pair, high-frequency traders have assumed the liquidity-provision function once provided by specialists and then inherited by the “market-makers” who succeeded them. But they do it cheaper and better and “competition forces them to pass most of the savings on to us investors.”

Of course, whenever interlopers come along to chip away at profits once earned by bigger, less competitive firms or individuals, the latter invariably cry bloody murder. That is what has happened here, and the screamers form the cheering public audience for the immorality play being staged by regulators.

But a receptive audience isn’t motivation enough. Why has the national police force (the FBI) been called out by securities regulators to cope with this menace that is benefitting small investors and reducing trading costs for the market at large? Precisely because the success of the market threatens to leave regulators without anybody to regulate. If the market works so smoothly that specialists are unnecessary and bid-ask spreads become tiny, people will begin to wonder why the majestic edifice of securities regulation is required. Brokers are fast going the way of insurance salesmen; prospectuses can be found on the Internet and index funds are becoming a way of life. The SEC is going to have to create a threat to justify suppressing the technology that is making it as obsolete as other artifacts of the old days.

Once again, the pattern is familiar. Technology is steadily improving the lives of Americans across the country and government regulators are frantically trying to hold it back to keep from losing their jobs. And they are being aided by incumbents whose jobs and profits are threatened by the competitive innovations.

The Big Daddy of Regulation

The biggest, longest-lived and most pernicious regulator of them all is the Food and Drug Administration (FDA). The story of this behemoth merits an EconBrief all its own.

DRI-259 for week of 2-2-14: Kristallnacht for the Rich: Not Far-Fetched

An Access Advertising EconBrief:

Kristallnacht for the Rich: Not Far-Fetched

Periodically, the intellectual class aptly termed “the commentariat” by The Wall Street Journal works itself into a frenzy. The issue may be a world event, a policy proposal or something somebody wrote or said. The latest cause célèbre is a submission to the Journal’s letters column by a partner in one of the nation’s leading venture-capital firms. The letter ignited a firestorm; the editors subsequently declared that Tom Perkins of Kleiner Perkins Caufield & Byers “may have written the most-read letter to the editor in the history of The Wall Street Journal.”

What could have inspired the famously reserved editors to break into temporal superlatives? The letter’s rhetoric was both penetrating and provocative. It called up an episode in the 20th century’s most infamous political regime. And the response it triggered was rabid.

“Progressive Kristallnacht Coming?”

“…I would call attention to the parallels of fascist Nazi Germany to its war on its ‘one percent,’ namely its Jews, to the progressive war on the American one percent, namely ‘the rich.’” With this ice breaker, Tom Perkins made himself a rhetorical target for most of the nation’s commentators. Even those who agreed with his thesis felt that Perkins had no business using the Nazis in an analogy. The Wall Street Journal editors said “the comparison was unfortunate, albeit provocative.” They recommended reserving Nazis only for rarefied comparisons to tyrants like Stalin.

On the political Left, the reaction was less measured. The Anti-Defamation League accused Perkins of insensitivity. Bloomberg View characterized his letter as an “unhinged Nazi rant.”

No, this bore no traces of an irrational diatribe. Perkins had a thesis in mind when he drew an analogy between Nazism and Progressivism. “From the Occupy movement to the demonization of the rich, I perceive a rising tide of hatred of the successful one percent.” Perkins cited the abuse heaped on workers traveling Google buses from the cities to the California peninsula. Their high wages allowed them to bid up real-estate prices, thereby earning the resentment of the Left. Perkins’ ex-wife Danielle Steel placed herself in the crosshairs of the class warriors by amassing a fortune writing popular novels. Millions of dollars in charitable contributions did not spare her from criticism for belonging to the one percent.

“This is a very dangerous drift in our American thinking,” Perkins concluded. “Kristallnacht was unthinkable in 1930; is its descendant ‘progressive’ radicalism unthinkable now?” Perkins’ point is unmistakable; his letter is a cautionary warning, not a comparison of two actual societies. History doesn’t repeat itself, but it does rhyme. Kristallnacht and Nazi Germany belong to history. If we don’t mend our ways, something similar and unpleasant may lie in our future.

A Short Refresher Course in Early Nazi Persecution of the Jews

Since the current debate revolves around the analogy between Nazism and Progressivism, we should refresh our memories about Kristallnacht. The name itself translates loosely into “Night of Broken Glass.” It refers to the shards of broken window glass littering the streets of cities in Germany and Austria on the night and morning of November 9-10, 1938. The windows belonged to houses, hospitals, schools and businesses owned and operated by Jews. These buildings were first looted, then smashed by elements of the German paramilitary SA (the Brownshirts) and SS (security police), led by the Gauleiters (regional leaders).

In 1933, Adolf Hitler was elevated to the German chancellorship after the Nazi Party won a plurality of votes in the national election. Almost immediately, laws placing Jews at a disadvantage were passed and enforced throughout Germany. The laws were the official expression of the philosophy of German anti-Semitism that dated back to the 1870s, the time when German socialism began evolving from the authoritarian roots of Otto von Bismarck’s rule. Nazi officialdom awaited a pretext on which to crack down on Germany’s sizable Jewish population.

The pretext was provided by the assassination of German official Ernst vom Rath on Nov. 7, 1938 by Herschel Grynszpan, a 17-year-old Polish-Jewish youth who had been born in Germany. The boy was apparently upset by German policies expelling his parents from the country. Ironically, vom Rath’s sentiments were anti-Nazi and opposed to the persecution of Jews. Vom Rath’s death on Nov. 9 was the signal for release of Nazi paramilitary forces on a reign of terror and abduction against German and Austrian Jews. Police were instructed to stand by and not interfere with the SA and SS as long as only Jews were targeted.

According to official reports, 91 deaths were attributed directly to Kristallnacht. Some 30,000 Jews were spirited off to jails and concentration camps, where they were treated brutally before finally winning release some three months later. In the interim, though, some 2,000-2,500 Jews died in the camps. Over 7,000 Jewish-owned or operated businesses were damaged. Over 1,000 synagogues in Germany and Austria were burned.

The purpose of Kristallnacht was not only wanton destruction. The assets and property of Jews were seized to enhance the wealth of the paramilitary groups.

Today we regard Kristallnacht as the opening round of Hitler’s Final Solution – the policy that produced the Holocaust. This strategic primacy is doubtless why Tom Perkins invoked it. Yet this furious controversy will just fade away, merely another media preoccupation du jour, unless we grasp its enduring significance. Obviously, Tom Perkins was not saying that the Progressive Left’s treatment of the rich is now comparable to Nazi Germany’s treatment of the Jews. The Left is not interning the rich in concentration camps. It is not seizing the assets of the rich outright – at least not on a wholesale basis, anyway. It is not reducing the homes and businesses of the rich to rubble – not here in the U.S., anyway. It is not passing laws to discriminate systematically against the rich – at least, not against the rich as a class.

Tom Perkins was issuing a cautionary warning against the demonization of wealth and success. This is a political strategy closely associated with the philosophy of anti-Semitism; that is why his invocation of Kristallnacht is apropos.

The Rise of Modern Anti-Semitism

Despite the politically correct horror expressed by the Anti-Defamation League toward Tom Perkins’ letter, reaction to it among Jews has not been uniformly hostile. Ruth Wisse, professor of Yiddish and comparative literature at Harvard University, wrote an op-ed for The Wall Street Journal (02/04/2014) defending Perkins.

Wisse traced the modern philosophy of anti-Semitism to the philosopher Wilhelm Marr, whose heyday was the 1870s. Marr “charged Jews with using their skills ‘to conquer Germany from within.’” Marr was careful to distinguish his philosophy of anti-Semitism from prior philosophies of anti-Judaism. Jews “were taking unfair advantage of the emerging democratic order in Europe with its promise of individual rights and open competition in order to dominate the fields of finance, culture and social ideas.”

Wisse declared that “anti-Semitism channel[ed] grievance and blame against highly visible beneficiaries of freedom and opportunity.” “Are you unemployed? The Jews have your jobs. Is your family mired in poverty? The Rothschilds have your money. Do you feel more secure in the city than you did on the land? The Jews are trapping you in the factories and charging you exorbitant rents.”

The Jews were undermining Christianity. They were subtly perverting the legal system. They were overrunning the arts and monopolizing the press. They spread Communism, yet practiced rapacious capitalism!

This modern German philosophy of anti-Semitism long predated Nazism. It accompanied the growth of the German welfare state and German socialism. The authoritarian political roots of Nazism took hold under Otto von Bismarck’s conservative socialism, and so did Nazism’s anti-Semitic cultural roots. The anti-Semitic conspiracy theories ascribing Germany’s every ill to the Jews were not the invention of Hitler, but of Wilhelm Marr over half a century before Hitler took power.

The Link Between the Nazis and the Progressives: the War on Success

As Wisse notes, the key difference between modern anti-Semitism and its ancestor – what Wilhelm Marr called “anti-Judaism” – is that the latter abhorred the religion of the Jews while the former resented the disproportionate success enjoyed by Jews much more than their religious observances. The modern anti-Semitic conspiracy theorist pointed darkly to the predominance of Jews in high finance, in the press, in the arts and running movie studios and asked rhetorically: How do we account for the coincidence of our poverty and their wealth, if not through the medium of conspiracy and malefaction? The case against the Jews is portrayed as prima facie and morphs into per se through repetition.

Today, the Progressive Left operates in exactly the same way. “Corporation” is a pejorative. “Wall Street” is the antonym of “Main Street.” The very presence of wealth and high income is itself damning; “inequality” is the reigning evil and is tacitly assigned a pecuniary connotation. Of course, this tactic runs counter to the longtime left-wing insistence that capitalism is inherently evil because it forces us to adopt a materialistic perspective. Indeed, environmentalism embraces anti-materialism to this day while continuing to bunk in with its progressive bedfellows.

We must interrupt with an ironic correction. Economists – according to conventional thinking the high priests of materialism – know that it is human happiness and not pecuniary gain that is the ultimate desideratum. Yet the constant carping about “inequality” looks no further than money income in its supposed solicitude for our well-being. Thus, the “income-inequality” progressives – seemingly obsessed with economics and materialism – are really anti-economic. Economists, supposedly green-eyeshade devotees of numbers and models, are the ones focusing on human happiness rather than ideological goals.

German socialism metamorphosed into fascism. American Progressivism is morphing from liberalism to socialism and – ever more clearly – homing in on its own version of fascism. Both employed the technique of demonization and conspiracy to transform the mutual benefit of free voluntary exchange into the zero-sum result of plunder and theft. How else could productive effort be made to seem fruitless? How else could success be made over into failure? This is the cautionary warning Perkins was sounding.

The Great Exemplar

The great Cassandra of political economy was F.A. Hayek. Early in 1929, he predicted that Federal Reserve policies earlier in the decade would soon bear poisoned fruit in the form of a reduction in economic activity. (His mentor, Ludwig von Mises, was even more emphatic, foreseeing “a great crash” and refusing a prestigious financial post for fear of association with the coming disaster.) He predicted that the Soviet economy would fail owing to lack of a functional price system; in particular, missing capital markets and interest rates. He predicted that Keynesian policies begun in the 1950s would culminate in accelerating inflation. All these came true, some of them within months and some after a lapse of years.

Hayek’s greatest prediction was really a cautionary warning, in the same vein as Tom Perkins’ letter but much more detailed. The 1944 book The Road to Serfdom made the case that centralized economic planning could operate only at the cost of the free institutions that distinguished democratic capitalism. Socialism was really another form of totalitarianism.

The reaction to Hayek’s book was much the same as the reaction to Perkins’ letter. Many commentators who should have known better accused both men of fascism. They also accused both men of describing a current state of affairs when each was really trying to avoid a dystopia.

The flak Hayek took was especially ironic because his book actually served to prevent the outcome he feared. But instead of winning the acclaim of millions, this earned him the scorn of intellectuals. The intelligentsia insisted that Hayek predicted the inevitable succession of totalitarianism after the imposition of a welfare state. When welfare states in Great Britain, Scandinavia, and South America failed to produce barbed wire, concentration camps and German Shepherd dogs, the Left advertised this as proof of Hayek’s “exaggerations” and “paranoia.”

In actual fact, Great Britain underwent many of the changes Hayek had feared and warned against. The notorious Control of Engagement Order, for instance, was an attempt by a Labour government to centrally control the English labor market – to specify an individual’s work and wage rather than allowing free choice in an impersonal market to do the job. The attempt failed just as dismally as Hayek and other free-market economists had foreseen it would. In the 1980s, it was Hayek’s arguments, wielded by Prime Minister Margaret Thatcher, that paved the way for the rolling back of British socialism and the taming of inflation. It’s bizarre to charge the prophet of doom with inaccuracy when his prophecy is the savior, but that’s what the Left did to Hayek.

Now they are working the same familiar con on Tom Perkins. They begin by misconstruing the nature of his argument. Later, if his warnings are successful, they will use that against him by claiming that his “predictions” were false.

Enriching Perkins’ Argument

This is not to say that Perkins’ argument is perfect. He has instinctively fingered the source of the threat to our liberties. Perkins himself may be rich, but his argument isn’t; it is threadbare and skeletal. It could use some enriching.

The war on the wealthy has been raging for decades. The opening battle is lost to history, but we can recall some early skirmishes and some epic brawls prior to Perkins.

In Europe, the war on wealth used anti-Semitism as its spearhead. In the U.S., however, the popularity of Progressives in academia and government made antitrust policy a more convenient wedge for their populist initiatives against success. Antitrust policy was a crown jewel of the Progressive movement in the early 1900s; Presidents Theodore Roosevelt and William Howard Taft cultivated reputations as “trust busters.”

The history of antitrust policy exhibits two pronounced tendencies: the use of the laws to restrict competition for the benefit of incumbent competitors and the use of the laws by the government to punish successful companies for various political reasons. The sobering research of Dominick Armentano shows that antitrust policy has consistently harmed consumer welfare and economic efficiency. The early antitrust prosecution of Standard Oil, for example, broke up a company that had consistently increased its output and lowered prices to consumers over long time spans. The Orwellian rhetoric accompanying the judgment against ALCOA in the 1940s reinforces the notion that punishment, not efficiency or consumer welfare, was behind the judgment. The famous prosecutions of IBM and AT&T in the 1970s and 80s each spawned book-length investigations showing the perversity of the government’s claims. More recently, Microsoft became the latest successful firm to reap the government’s wrath for having the temerity to revolutionize industry and reward consumers throughout the world.

The rise of the regulatory state in the 1970s gave agencies and federal prosecutors nearly unlimited, unsupervised power to work their will on the public. Progressive ideology combined with self-interest to create a powerful engine for the demonization of success. Prosecutors could not only pursue their personal agendas but also climb the career ladder by making high-profile cases against celebrities. The prosecution of Michael Milken of Drexel Burnham Lambert is a classic case of persecution in the guise of prosecution. Milken virtually created the junk-bond market, thereby originating an asset class that has enhanced the wealth of investors by untold billions or trillions of dollars. For his pains, Milken was sent to jail.

Martha Stewart is a high-profile celebrity who was, in effect, convicted of the crime of being famous. She was charged with and convicted of lying to federal investigators about a case in which the only crime could have been the offense of insider trading. But she was the trader, and she was not charged with insider trading. The utter triviality of the matter and the absence of any damage to consumers or society at large make it clear that she was targeted because of her celebrity; i.e., her success.

Today, the impetus for pursuing successful individuals and companies comes primarily from the federal level. Harvey Silverglate (author of Three Felonies a Day) has shown that virtually nobody is safe from the depredations of prosecutors out to advance their careers by racking up convictions at the expense of justice.

Government is the institution charged with making and enforcing law, yet government has now become the chief threat to law. At the state and local level, governments hand out special favors and tax benefits to favored recipients – typically those unable to attain success on their own efforts – while making up the revenue from the earned income of taxpayers at large. At the federal level, Congress fails in its fundamental duty and ignores the law by refusing to pass budgets. The President appoints czars to make regulatory law, while choosing at discretion to obey the provisions of some laws and disregard others. In this, he fails his fundamental executive duty to execute the laws faithfully. Judges treat the Constitution as a backdrop for the expression of their own views rather than as a subject for textual fidelity. All parties interpret the Constitution to suit their own convenience. The overarching irony here is that the least successful institution in America has united in a common purpose against the successful achievers in society.

The most recent Presidential campaign was conducted largely as a jihad against the rich and successful in business. Mitt Romney was forced to defend himself against the charge of succeeding too well in his chosen profession, as well as the corollary accusation that his success came at the expense of the companies and workers in which his private-equity firm invested. Either his success was undeserved or it was really failure. There was no escape from the double bind against which he struggled.

It is clear, then, that the “progressivism” decried by Tom Perkins dates back over a century and that it has waged a war on wealth and success from the outset. The tide of battle has flowed – during the rampage of the Bull Moose, the Depression and New Deal and the recent Great Recession and financial crisis – and ebbed – under Eisenhower and Reagan. Now the forces of freedom have their backs to the sea.

It is this much-richer context that forms the backdrop for Tom Perkins’ warning. Viewed in this panoramic light, Perkins’ letter looks more and more like the battle cry of a counter-revolution than the crazed rant of an isolated one-percenter.

DRI-267 for week of 10-27-13: ObamaCare and the Point of No Return

An Access Advertising EconBrief:

ObamaCare and the Point of No Return

The rollout of ObamaCare – long-awaited by its friends, long-dreaded by its foes – took place last week. In this case, the term “rollout” is apropos, since the program is not exactly up on its feet. Tuesday, Oct. 22, 2013 marked the debut of HealthCare.gov, the ObamaCare website, where prospective customers of the program’s health-insurance exchanges go to apply for coverage. By comparison, Facebook’s IPO was a rip-roaring success.

A diary of highlights seems like the best way to do justice to this fiasco. We are indebted to the Heritage Foundation for the chronology and many of the specific details that follow.

Tuesday, Oct. 22, 2013: This is ribbon-cutting day for the website, through which ObamaCare’s state health-insurance exchanges expect to do most of their business. One of the most fundamental reforms sought by free-market economists is the geographic market integration of health care in the U.S. Historically, each state has its own laws and regulatory apparatus governing insurance. This hamstrings competition. It requires companies to deal with 50 different bureaucracies in order to compete nationally and limits consumers solely to companies offering policies in their state. But ObamaCare is dedicated to the proposition that health care of, by and for government shall not perish from the earth, so it not only perpetuates but complicates this setup by interposing the artificial creation of a health-care exchange for each state, operating under a federal aegis.

Only 36 of those state exchanges open for business on time today, however. Last-minute rehearsals warned of impending chaos, and the frantic responses produced delays. Sure enough, as the day wears on, 47 states eventually report applicant complaints of “frequent error messages.” Despite massive volume on the ObamaCare site, there is almost no evidence of actual completed applications.

Wednesday, Oct. 23, 2013: The Los Angeles Times revises yesterday’s report of 5 million “hits” on HealthCare.gov from applicants in California downward just a wee bit, to 645,000. But there is still no definitive word on actual completed applications, leading some observers to wonder whether there are any.

Thursday, Oct. 24, 2013: The scarcity of actual purchasers of health insurance on the ObamaCare exchanges leads a Washington Post reporter to compare them in print to unicorns.  More serious, though, are the growing reports of thousands of policy cancellations suffered by Americans across the nation. The culprit is ObamaCare itself; victims’ current coverage doesn’t meet new ObamaCare guidelines on matters such as openness to pre-existing conditions. Ordinarily, a significant pre-existing health condition would preclude coverage or rate a high premium. In other words, writing policies that ignore pre-existing conditions is not insurance in the true, classical sense; insurance substitutes cost for risk and the former must be an increasing function of the latter in order for the process to make any sense. ObamaCare is not really about insurance, despite its protestations to the contrary.
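To spell out that classical logic (a stylized sketch; the numbers are hypothetical and chosen only for illustration), a true insurer prices each policy at roughly

\[ P_i = p_i \times L + c \]

where \(p_i\) is the policyholder’s probability of filing a claim, \(L\) is the size of the covered loss and \(c\) is a loading for administrative cost. The premium is an increasing function of risk: against a $50,000 loss, an applicant with \(p_i = 0.01\) pays about $500 plus loading, while an applicant with a serious pre-existing condition and \(p_i = 0.40\) would owe about $20,000. Charging both the same premium does not repeal this arithmetic; it simply makes the low-risk buyer overpay so the high-risk buyer can underpay – a transfer, not insurance.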

Friday, Oct. 25, 2013: CNBC estimates that only 1% of website applicants can proceed fully to completion and obtain a policy online because the system cannot generate sufficient valid information to process the others. A few states – notably Kentucky – have reported thousands of successful policies issued, but the vast bulk of these now appear to be Medicaid enrollees rather than health-insurance policyholders. Meanwhile, the Department of Health and Human Services (HHS) announces that its website will be offline for repairs and upgrading.

Saturday, Oct. 26, 2013: In an interview with Fox News, Treasury Secretary Jack Lew refuses to cite a figure for completed applications on the HealthCare.gov website. Among those few that have successfully braved the process, premiums seem dramatically higher than those previously paid. One example was a current policyholder whose monthly premium of $228 ballooned to $1,208 on the new ObamaCare health-care exchange policy.

Monday, Oct. 28, 2013: Dissatisfaction with the process of website enrollment is now so general that applying via paper forms has become the method of choice. It is highly ironic that well into the 21st century, a political administration touting its technological progressivity has fallen back on the tools of the 19th century to advance its signature legislative achievement.

Official Reaction

This diary of the reception to ObamaCare conveys the impression of a public that is more than sullen in its initial reaction to the program – it is downright mutinous. It was hardly surprising, then, that President Obama chose to respond to public complaints by holding a press conference in the White House Rose Garden a few days after rollout.

Mr. Obama’s attitude can best be described as “What’s the problem?” His tone combined the unique Obama blend of hauteur and familiarity. The Affordable Care Act, he insisted, was “not just a website.” If people were having trouble accessing the website or completing the application process or making contact with an insurance company to discuss an actual plan – why, then, they could just call the government on the phone and “talk to somebody directly and they can walk you through the application process.” (How many of the President’s listeners hearkened back at this point to their previous soul-satisfying experiences on the phone with, let’s say, the IRS?) This would take about 25 minutes for an individual, Mr. Obama assured his viewers, and about 45 minutes for a family. He gave out a 1-800 number for his viewers to call. Reviews of the President’s performance noted his striking resemblance to infomercial pitchmen.

Sean Hannity was so inspired by the President’s call to action that he resolved to heed it. He called the toll-free number on-air during his AM-radio show. He spoke with a call-center employee who admitted that “we’re having a lot of glitches in the system.” She read the script that she had been given to use in dealing with disgruntled callers. Hannity thanked her and complimented her on her courtesy and honesty. She was fired the next day. Hannity declared he would compensate her for one year’s lost salary and vowed to set up a fund for callers who wanted to contribute in her behalf.

Health and Human Services Secretary Kathleen Sebelius was next up on the firing line. Cabinet officials were touring eight cities and selected regional sites to promote the program, and at her first stop, a community center in Austin, TX, Sebelius held a press conference to respond to public outrage over the glitches in the program.

On October 26, 2013, the Fox News website sported the headline: “Sebelius Suggests Republicans to Blame for ObamaCare Website Woes.” Had the Republican Party chosen the IT contractor responsible for setting up HealthCare.gov’s website?

No. “Sebelius suggest[ed] that Republican efforts to delay and defund the law contributed to HealthCare.gov’s glitch-ridden debut.” Really. How? Sebelius “conceded that there wasn’t enough testing done on the website, but added that her department had little flexibility to postpone the launch against the backdrop of Washington’s unforgiving politics. ‘In an ideal world, there would have been a lot more testing, but we did not have the luxury of that. And the law said the go-time was Oct. 1. And frankly, a political atmosphere where the majority party, at least in the House, was determined to stop this any way they possibly could…was not an ideal atmosphere.’”

It takes the listener a minute or so to catch his breath in the face of such effrontery. The Obama Administration had three years in which to prepare for the launch of the program. True, there were numerous changes to the law and to administrative procedures, but these were all made by the administration itself for policy reasons. The Democrat Party, not the Republican Party, is the majority party. The Republican Party – no, make that the Tea Party wing of the Republican Party – proposed a debt-limit settlement in which the individual mandate for insurance-policy ownership would be delayed. It was rejected by the Obama Administration. Ms. Sebelius is blaming the Republican Party for the fact that Democrats were rushed, when the Republicans in fact offered the Democrats a delay that the Democrats refused.

Were Ms. Sebelius a high-level executive in charge of rolling out a new product, her performance to date would result in her dismissal. But when queried about the possibility of stepping down, she responded “The majority of people calling for me to resign, I would say, are people I don’t work for and who did not want this program to work in the first place.” Parsing this statement yields some very uncomfortable conclusions. Ms. Sebelius’s employer is not President Obama or his administration; it is the American people. Anybody calling for her resignation is also an American. But clearly she does not see it that way. Obviously, the people calling for her resignation are Republicans. And she does not see herself as working for Republicans. The question is: Who is she working for?

Two possibilities stand out. Possibility number one is that she is working for the Democrat Party. In other words, she sees the executive branch as a spoils system belonging to the political party in power. Her allegiance is owed to the source of her employment; namely, her party. Possibility number two is that she sees her allegiance as owed to President Obama, her nominal boss. This might be referred to as the corporatist (as opposed to corporate) view of government, in which government plays the role of corporation and there are no shareholders.

Neither one of these possible conceptions is compatible with republican democracy, in which ultimate authority resides with the voters. In this case, the voters are expressing vocal dissatisfaction and Ms. Sebelius is telling them to take a hike. In a free-market corporation, Ms. Sebelius would be the one unfolding her walking papers and map.

Whose Back is Against the Wall?

It is tempting to conclude that ObamaCare is the Waterloo that the right wing has been predicting and planning for President Obama ever since Election Day, 2008. And this does have a certain superficial plausibility. ObamaCare is this Administration’s signature policy achievement – indeed, practically its only one. There is no doubt that the Administration looks bad, even by the relaxed standards of performance it set during the last five years.

Unfortunately, this view of President Obama with his back against the wall, despairing and fearful, contemplating resignation or impeachment, simply won’t survive close scrutiny. It is shattered by a sober review of Barack Obama’s past utterances on the subject of health care.

As a dedicated man of the Left, Barack Obama’s progressive vision of health care in America follows one guiding star: the single-payer system. That single payer is the federal government. Barack Obama and the progressive Left are irrevocably wedded to the concept of government ownership and control of health care, à la Great Britain’s National Health Service. In speeches and interviews going back to the beginning of his career, Obama has pledged allegiance to this flag and to the collective for which it stands, one organic unity under government, indivisible, with totalitarianism and social justice for all.

The fact that ObamaCare is now collapsing around our ears may be temporarily uncomfortable for the Obama Administration, but it is in no way incompatible with this overarching goal. Just the opposite, in fact. In order to get from where we are now to a health-care system completely owned and operated by the federal government, our private system of doctors, hospitals and insurance companies must be either subjugated, occupied or destroyed, respectively. That process has now started in earnest.

Oh, the Administration would rather that private medicine went gentle into that good night. It would have preferred killing private health insurance via euthanasia rather than brutal murder, for example. But the end is what matters, not the means.

Certainly the Administration would have preferred to maintain its hypnotic grip on the loyalty of the mainstream news media. Instead, the members of the broadcast corps are reacting to ObamaCare’s meltdown as they did upon first learning that they were not the product of immaculate conception. But this is merely a temporary dislocation, not a permanent loss. What will the news media do when the uproar dies down – change party affiliation?

For anybody still unconvinced about the long-run direction events will take, the Wednesday, October 30, 2013 lead editorial in The Wall Street Journal is the clincher.

“Americans are Losing Their Coverage by Political Design”

“For all of the Affordable Care Act’s technical problems,” the editors observe, “at least one part is working on schedule. The law is systematically dismantling the private insurance market, as its architects intended from the start.”

It took a little foresight to see this back when the law was up for passage. The original legislation included a passage insisting that it should not “be construed to require that an individual terminate coverage that existed as of March 23, 2010.” This “Preservation of Right to Maintain Existing Coverage” was the fig leaf shielding President Obama’s now-infamous declaration that “if you like your existing policy, you can keep it.” Yeah, right.

Beginning in June, 2010, HHS started generating new regulations that chipped away at this “promise.” Every change in policy, no matter how minor, became an excuse for terminating existing coverage at renewal time. This explains the fact that some 2 million Americans have received cancellation notices from their current insurers. Of course, the Obama Administration has adopted the unified stance that these cancellations are the “fault” of the insurance companies – which is a little like blaming your broken back on your neighbor because he jumped out of the way when you fell off your roof instead of standing under you to cushion your fall. Stray callers to AM radio can be heard maintaining that at least half of these cancellations will be reinstated with new policies at lower cost in the ObamaCare exchanges. If only those hot-headed Tea Partiers would stop dumping boxes of tea and behaving like pirates! Alas, a Rube Goldberg imitation of a market cannot replace the genuine article – with apologies to Mr. Goldberg, whose roundabout contraptions actually worked.

ObamaCare creates 10 types of legally defined medical benefits. They include general categories like hospitalization and prescription drugs. No policy that fails to meet the exact standards defined within the law can survive the ObamaCare review. It is widely estimated that about 80% of all individual plans, which cover 7% of the U.S. population under age 65, will fall victim to the ObamaCare scythe.

The law is replete with the Orwellian rhetoric of progressive liberalism. HHS defines its purpose as the “offer [of] a small number of meaningful choices.” Uh…what about allowing individuals to gauge the tradeoff between price and quality of care that best suits their own preferences, incomes and particular medical circumstances? No, that would have “allowed extremely wide variation across plans in the benefits offered” and thus “would not have assured consumers that they would have coverage for basic benefits.” This is doublespeak for “we are restricting your range of choice for your own good, dummy.”

Liberals typically respond with a mixture of outrage and indignation when exposed as totalitarians. It is certainly true that they are not eradicating freedom of choice merely for the pure fun of it. They must create a fictitious product called “insurance” to serve a comparatively small population of people who cannot be served by true insurance – people with pre-existing conditions that make them uninsurable or ratable at very high premiums or coverage exclusions. The exorbitant costs of serving this market through government require that the tail wag the dog – that the large number of young, healthy people pay ridiculously high premiums for a product they don’t want or need in order to balance the books on this absurd enterprise. (Formerly, governments simply borrowed the money to pay for such pay-as-you-go boondoggles, but the financial price tag on this modus operandi is now threatening to bring down European welfare states around the ears of their citizens – so this expedient is no longer viable.) In order to justify enrolling everybody and his brother-in-law in coverage, government has to standardize coverage by including just about every conceivable benefit and excluding practically nothing. After all, we’re forcing people to sign up so we can’t very well turn around and deny them coverage for something the way a real, live insurance company would, can we?

It is well known that the bulk of all medical costs arise from treating the elderly. In a rational system, this would be no problem because people would save for their own old age and generate the real resources necessary to fund it. But the wrong turn in our system began in World War II, when the tax-free status of employer-provided health benefits encouraged the substitution of job-related health insurance for the wage increases that were proscribed by wartime government wage and price controls. The gradual dominance of third-party payment for health care meant that demand went through the roof, dragging health-care prices upward with it.

Now Generation X finds itself stuck with the mother of all tabs by the President whom it elected. The Gen X’ers are paying Social Security taxes to support their feckless parents and grandparents, who sat still for a Ponzi scheme and now want their children to make good. To add injury to injury, the kids are also stuck with gigantic prices for involuntary “insurance” they don’t want and can’t afford to support their elders, the uninsurables – and the incredibly costly government machinery to administer it all.

It’s just as the old-time leftist revolutionaries used to say: you can’t make an omelette without breaking eggs. Across the nation, we have heard the sound of eggs cracking for the last week.

The Point of No Return

The “point of no return” is a familiar principle in international aviation. It is the point beyond which it is closer to the final destination than to the point of origination, or the point beyond which it makes no sense to turn back. This is particularly applicable to trans-oceanic travel, where engine trouble or some other unexpected problem might make the fastest possible landing necessary.
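For readers who want the aviator’s arithmetic spelled out (a textbook equal-time calculation, simplified here by assuming constant groundspeeds), on a route of length \(D\), with groundspeed \(g_{on}\) continuing onward and \(g_{back}\) turning back, the turn-back point \(x\) is where the two flying times are equal:

\[ \frac{D - x}{g_{on}} = \frac{x}{g_{back}} \quad\Longrightarrow\quad x = D \cdot \frac{g_{back}}{g_{on} + g_{back}} \]

With no wind, \(g_{on} = g_{back}\) and \(x = D/2\): beyond the halfway point, pressing on to the destination is always faster than turning around, no matter what goes wrong.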

In our case, the Obama Administration has kept this concept firmly in mind. By embroiling as many Americans as deeply as possible in the tentacles of government, President Obama intends to create a state of affairs in which – no matter how bad the current operation of ObamaCare may be – it will seem preferable to most Americans to go forward to a completely government-run system rather than “turn back the clock” to a free-market system.

A free-market system works because competition works. On the supply side of the market, eliminating state regulation of insurance would enable companies to expand across state borders and compete with each other. But this involves relying upon companies to serve consumers. And companies are the entities that just got through issuing all those cancellation notices. For millions of Americans today, the only disciplinary mechanism affecting companies is something called “government regulation” that forces them to do “the right thing” by bludgeoning them into submission. That is what regulatory agencies are doing right now – beating up on Wall Street firms and banks for causing the financial crisis of 2008 and ensuing Great Recession. The fact that this never seems to prevent the next crisis doesn’t seem to penetrate the public consciousness, for the only antidote for the failure of government regulation is more and stronger government regulation.

On the demand side of a free market, consumers scrutinize the products and services available at alternative prices and choose the ones they prefer the most. But consumers are not used to buying their own health care and vaguely feel that the idea is both dishonest and unfair. “Health care should be a right, not a privilege,” is the rallying cry of the left wing – as if proclaiming this state of affairs were tantamount to executing it. No such thing as a guaranteed right to goods and services can exist, since giving one person an enforceable right to goods imposes on others the obligation to pay for and produce them. In the financial sense, somebody must pay for the goods provided. In the real sense, virtually all goods are produced using resources that have alternative uses, so producing more of some goods always means producing fewer of other goods.

This is not what the “health-care-should-be-a-right-not-a-privilege” proclaimers are talking about. Their idea is that we will give everybody more of this one thing – health care – and have everything else remain the same as it is now. That is a fantasy. But this fantasy is the prevailing mental state throughout much of the nation. One widely quoted comment by a bitterly disappointed victim of policy cancellation is revealing: “I was all for ObamaCare until I found out I was going to have to pay for it.” On right-wing talk radio, this remark is considered proof of public disillusion with President Obama. But note: The victim did not say: “I was all for ObamaCare until I found out what I was going to have to pay for it.” The distinction is vital. Today, a free lunch is considered only fitting and proper in health care. And the only free lunch to be had is the pseudo-free lunch offered by a government-run, single-payer system.

As it stands now, few if any Americans can recall what it was like to pay for their own health care. Few have experienced a true free market in medicine and health care. Thus, they will be taking the word of economists on faith that it would be preferable to a government-run system like the one in Great Britain. It is a tribute to the power of ideas that a commentator like Rush Limbaugh can make repeated references to individuals paying for their own care without generating a commercially fatal outpouring of outrage from his audience.

Grim as this depiction may seem, it accurately describes the dilemma we face.