DRI-172 for week of 7-5-15: How and Why Did ObamaCare Become SCOTUSCare?

An Access Advertising EconBrief:

How and Why Did ObamaCare Become SCOTUSCare?

On June 25, 2015, the Supreme Court of the United States delivered its most consequential opinion in recent years in King v. Burwell. King was David King, one of several plaintiffs opposing Sylvia Burwell, Secretary of Health and Human Services. The case might more colloquially be called “ObamaCare II,” since it dealt with the second major attempt to overturn the Obama administration’s signature legislative achievement.

The Obama administration has been bragging about its success in attracting signups for the program. Not surprisingly, it fails to mention two facts that make this apparent victory Pyrrhic. First, most of the signups are people who lost their previous health insurance due to the law’s provisions, not people who lacked insurance to begin with. Second, a large chunk of enrollees are being subsidized by the federal government in the form of a tax credit toward the cost of the insurance.

The point at issue in King v. Burwell is the legality of this subsidy. The original legislation provides for health-care exchanges established by state governments, and proponents have been quick to cite these provisions to pooh-pooh the contention that the Patient Protection and Affordable Care Act (PPACA) ushered in a federally-run, socialist system of health care. The specific language used by PPACA in Section 1401 is that the IRS can provide tax credits for insurance purchased on an exchange “established by the State.” That phrase appears 14 times in Section 1401 and each time it clearly refers to state governments, not the federal government. But in actual practice, states have found it excruciatingly difficult to establish these exchanges and many states have refused to do so. Thus, people in those states have turned to the federal government’s website for health insurance and have nevertheless received a tax credit under the IRS’s interpretation of Section 1401. That interpretation has been challenged in various lawsuits heard by lower courts, some of which have ruled for the plaintiffs and against attempts by the IRS and the Obama administration to award the tax credits.

Without the tax credits, many people on both sides of the political spectrum agree, PPACA will crash and burn. Not enough healthy people will sign up for the insurance to subsidize those with pre-existing medical conditions for whom PPACA is the only source of external funding for medical treatment.

To a figurative roll of drums, the Supreme Court of the United States (SCOTUS) released its opinion on June 25, 2015. It upheld the legality of the IRS interpretation in a 6-3 decision, finding for the government and the Obama administration for the second time. And for the second time, the opinion for the majority was written by Chief Justice John Roberts.

Roberts’ Rules of Constitutional Disorder

Given that Justice Roberts had previously written the opinion upholding the constitutionality of the law, his vote here cannot be considered a complete shock. As before, the shock was in the reasoning he used to reach his conclusion. In the first case (National Federation of Independent Business v. Sebelius, 2012), Roberts interpreted a key provision of the law in a way that its supporters had categorically and angrily rejected, both during the legislative debate prior to enactment and afterward. He characterized the “individual mandate” – the requirement that uninsured citizens purchase health insurance – as a tax. This rescued it from the otherwise untenable status of a coercive consumer directive – something not allowed under the Constitution.

Now Justice Roberts addressed the meaning of the phrase “established by the State.” He rejected one interpretation previously advanced by the government’s Solicitor General – that the term was an undefined term of art. He also disdained to apply a precedent established by the Court in a previous case involving the interpretation of law by administrative agencies, the Chevron case. That precedent says that where a statutory phrase is ambiguous, a reasonable interpretation by the agency charged with administering the law governs. In this case, though, Roberts claimed that since “the IRS…has no expertise in crafting health-insurance policy of this sort,” Congress could not possibly have intended to grant the agency this kind of discretion.

Roberts conceded that the most natural reading of “established by the State” excludes exchanges established by the federal government. But he argued that the Court could not interpret the law that way, because doing so would cause the law to fail to achieve its intended purpose. So the Court must treat the wording as ambiguous and interpret it in such a way as to advance the goals intended by Congress and the administration. Hence his decision for the defendant and against the plaintiffs.

In other words, he rejected the IRS’s authority to interpret the phrase “established by the State” because of that agency’s lack of health-care-policy expertise, yet was sufficiently confident of his own expertise in that area to interpret its meaning himself; it was his assessment of the market consequences that drove his decision to uphold the tax credits.

Roberts’ opinion prompted one of the most scathing, incredulous dissents in the history of the Court, by Justice Antonin Scalia. “This case requires us to decide whether someone who buys insurance on an exchange established by the Secretary gets tax credits,” begins Scalia. “You would think the answer would be obvious – so obvious that there would hardly be a need for the Supreme Court to hear a case about it… Under all the usual rules of interpretation… the government should lose this case. But normal rules of interpretation seem always to yield to the overriding principle of the present Court – the Affordable Care Act must be saved.”

The reader can sense Scalia’s mounting indignation and disbelief. “The Court interprets [Section 1401] to award tax credits on both federal and state exchanges. It accepts that the most natural sense of the phrase ‘an exchange established by the State’ is an exchange established by a state. (Understatement, thy name is an opinion on the Affordable Care Act!) Yet the opinion continues, with no semblance of shame, that ‘it is also possible that the phrase refers to all exchanges.’ (Impossible possibility, thy name is an opinion on the Affordable Care Act!)”

“Perhaps sensing the dismal failure of its efforts to show that ‘established by the State’ means ‘established by the State and the federal government,’ the Court tries to palm off the pertinent statutory phrase as ‘inartful drafting.’ The Court, however, has no free-floating power to rescue Congress from its drafting errors.” In other words, Justice Roberts has rewritten the law to suit himself.

To drive the point home, Scalia adds: “…the Court forgets that ours is a government of laws and not of men. That means we are governed by the terms of our laws and not by the unenacted will of our lawmakers. If Congress enacted into law something different from what it intended, then it should amend the law to conform to its intent. In the meantime, the Court has no roving license …to disregard clear language on the view that … Congress ‘must have intended’ something broader.”

“Rather than rewriting the law under the pretense of interpreting it, the Court should have left it to Congress to decide what to do… [the] Court’s two cases on the law will be remembered through the years. And the cases will publish the discouraging truth that the Supreme Court favors some laws over others and is prepared to do whatever it takes to uphold and assist its favorites… We should start calling this law SCOTUSCare.”

Jonathan Adler of the much-respected and quoted law blog Volokh Conspiracy put it this way: “The umpire has decided that it’s okay to pinch-hit to ensure that the right team wins.”

And indeed, what most stands out about Roberts’ opinion is its contravention of ordinary constitutional thought. It is not the product of a mind that began at square one and worked its way methodically to a logical conclusion. The reader senses a reversal of procedure; the Chief Justice started out with a desired conclusion and worked backwards to figure out how to justify reaching it. Justice Scalia says as much in his dissent. But Scalia does not tell us why Roberts is behaving in this manner.

If we are honest with ourselves, we must admit that we do not know why Roberts is saying what he is saying. Beyond question, it is arbitrary and indefensible. Certainly it is inconsistent with his past decisions. There are various reasons why a man might do this.

One obvious motivation might be that Roberts is being blackmailed by political supporters of the PPACA, within or outside of the Obama administration. Since blackmail is not only a crime but also a distasteful allegation to make, nobody will advance it without concrete supporting evidence – not only evidence against the blackmailer but also an indication of his or her ammunition. The opposite side of the blackmail coin is bribery. Once again, nobody will allege this publicly without concrete evidence, such as letters, tapes, e-mails, bank account or bank-transfer information. These possibilities deserve mention because they lie at the head of a short list of motives for betrayal of deeply held principles.

Since nobody has come forward with evidence of malfeasance – or is likely to – suppose we disregard that category of possibility. What else could explain Roberts’ actions? (Note the plural; this is the second time he has sustained PPACA at the cost of his own integrity.)

Lord Acton Revisited

To explain John Roberts’ actions, we must develop a model of political economy. That requires a short side trip into the realm of political philosophy.

Lord Acton’s famous maxim is: “Power tends to corrupt, and absolute power corrupts absolutely.” We are used to thinking of it in the context of a dictatorship or of an individual or institution temporarily or unjustly wielding power. But it is highly applicable within the context of today’s welfare-state democracies.

All of the Western industrialized nations have evolved into what F. A. Hayek called “absolute democracies.” They are democratic because popular vote determines the composition of representative governments. But they are absolute in scope and degree because the administrative agencies staffing those governments are answerable to no voter. And increasingly the executive, legislative and judicial branches of the governments wield powers that are virtually unlimited. In practical effect, voters vote on which party will wield nominal executive control over the agencies and dominate the legislature. Instead of a single dictator, voters elect a government body with revolving and rotating dictatorial powers.

As the power of government has grown, the power at stake in elections has grown commensurately. This explains the burgeoning amounts of money spent on elections. It also explains the growing rancor between opposing parties, since ordinary citizens perceive the loss of electoral dominance to be subjugation akin to living under a dictatorship. But instead of viewing this phenomenon from the perspective of John Q. Public, view it from within the brain of a policymaker or decisionmaker.

For example, suppose you are a completely fictional Chairman of a completely hypothetical Federal Reserve Board. We will call you “Bernanke.” During a long period of absurdly low interest rates, a huge speculative boom has produced unprecedented levels of real-estate investment by banks and near-banks. After stoutly insisting for years on the benign nature of this activity, you suddenly perceive the likelihood that this speculative boom will go bust and some indeterminate number of these financial institutions will become insolvent. What do you do? 

Actually, the question is really more “What do you say?” The actions of the Federal Reserve in regulating banks, including those threatened with or undergoing insolvency, are theoretically set down on paper, not conjured up extemporaneously by the Fed Chairman every time a crisis looms. These days, though, the duties of a Fed Chairman involve verbal reassurance and massage as much as policy implementation. Placing those duties in their proper light requires that our side trip be interrupted with a historical flashback.

Let us cast our minds back to 1929 and the onset of the Great Depression in the United States. At that time, virtually nobody foresaw the coming of the Depression – nobody in authority, that is. For many decades afterwards, the conventional narrative was that President Herbert Hoover adopted a laissez faire economic policy, stubbornly waiting for the economy to recover rather than quickly ramping up government spending in response to the collapse of the private sector. Hoover’s name became synonymous with government passivity in the face of adversity. Makeshift shanties and villages of the homeless and dispossessed became known as “Hoovervilles.”

It took many years to dispel this myth. The first truthteller was economist Murray Rothbard, who pointed out in his 1963 book America’s Great Depression that Hoover had spent his entire term in a frenzy of activism. Far from remaining a pillar of fiscal rectitude, Hoover had presided over federal deficit spending so large that his successor, Democrat Franklin Delano Roosevelt, campaigned on a platform of balancing the federal-government budget. Hoover sternly warned corporate executives not to lower wages and adopted an official stance in favor of inflation.

Professional economists ignored Rothbard’s book in droves, as did reviewers throughout the mass media. Apparently the fact that Hoover’s policies failed to achieve their intended effects persuaded everybody that he couldn’t have actually followed the policies he did – since his actual policies were the very policies recommended by mainstream economists to counteract the effects of recession and Depression and were largely indistinguishable in kind, if not in degree, from those followed later by Roosevelt.

The anathematization of Herbert Hoover drove Hoover himself to distraction. The former President lived another thirty years, to age ninety, stoutly maintaining his innocence of the crime of insensitivity to the misery of the poor and unemployed. Prior to his presidency, Hoover had built a reputation as one of the great humanitarians of the 20th century by deploying his engineering and organizational skills in the cause of disaster relief across the globe. The trashing of his reputation as President is one of history’s towering ironies. As it happened, his economic policies were disastrous, but not because he didn’t care about the people. His failure was ignorance of economics – the same sin committed by his critics.

Worse than the effects of his policies, though, was the effect his demonization has had on subsequent policymakers. We do not remember the name of the captain of the Californian, the ship that lay anchored within sight of the Titanic but failed to answer distress calls and go to the rescue. But the name of Hoover is still synonymous with inaction and defeat. In politics, the unforgivable sin became not to act in the face of any crisis, regardless of the consequences.

Today, unlike in Hoover’s day, the Chairman of the Federal Reserve Board is the quarterback of economic policy. This is so despite the Fed’s ambiguous status as a quasi-government body, owned by its member banks with a leader appointed by the President. Returning to our hypothetical, we ponder the dilemma faced by the Chairman, “Bernanke.”

Bernanke directly controls only monetary policy and bank regulation. But he receives information about every aspect of the U.S. economy in order to formulate Fed policy. The Fed also issues forecasts and recommendations for fiscal and regulatory policies. Even though the Federal Reserve is nominally independent of politics and of the Treasury Department, the Fed’s policies affect and are affected by government policies.

It might be tempting to assume that Fed Chairmen know what is going to happen in the economic future. But there is no reason to believe that is true. All we need do is examine their past statements to disabuse ourselves of that notion. Perhaps the popping of the speculative bubble that Bernanke now anticipates will produce an economic recession. Perhaps it will even topple the U.S. banking system like a row of dominoes and produce another Great Depression, à la 1929. But we cannot assume that either. The fact that we had one (1) Great Depression is no guarantee that we will have another one. After all, we have had 36 other recessions that did not turn into Great Depressions. There is nothing like a general consensus on what caused the Depression that began in 1929. (The reader is invited to peruse the many volumes written by historians, economic and non-, on the subject.) About the only point of agreement among commentators is that a large number of things went wrong more or less simultaneously and all of them contributed in varying degrees to the magnitude of the Depression.

Of course, a good case might be made that it doesn’t matter whether a Fed Chairman can foresee a coming Great Depression or not. Until recently, one of the few things that united contemporary commentators was their conviction that another Great Depression was impossible. The safeguards put in place in response to the first one had foreclosed that possibility. First, “automatic stabilizers” would cause government spending to rise in response to any downturn in private-sector spending, thereby heading off any cumulative downward movement in investment and consumption in response to failures in the banking sector. Second, the Federal Reserve could and would act quickly in response to bank failures to prevent the resulting reverse-multiplier effect on the money supply, thereby heading off that threat at the pass. Third, bank regulations were modified and tightened to prevent failures from occurring or restrict them to isolated cases.

Yet despite everything written above, we can predict confidently that our fictional “Bernanke” would respond to a hypothetical crisis exactly as the real Ben Bernanke did respond to the crisis he faced and later described in the book he wrote about it. The actual and predicted responses are the same: Scare the daylights out of the public by predicting an imminent Depression of cataclysmic proportions and calling for massive government spending and regulation to counteract it. Of course, the real-life Bernanke claimed that he and Treasury Secretary Henry Paulson correctly foresaw the economic future and were heroically calling for preventive measures before it was too late. But the logic we have carefully developed suggests otherwise.

Nobody – not Federal Reserve Chairmen or Treasury Secretaries or California psychics – can foresee Great Depressions. Predicting a recession is only possible if the cyclical process underlying it is correctly understood, and there is no generally accepted theory of the business cycle. No, Bernanke and Paulson were not protecting America with their warning; they were protecting themselves. They didn’t know that a Great Depression was in the works – but they did know that they would be blamed for anything bad that did happen to the economy. Their only way of insuring against that outcome – of buying insurance against the loss of their jobs, their professional reputations and the possibility of historical “Hooverization” – was to scream for the biggest possible government action as soon as possible.

Ben Bernanke had been blasé about the effects of ultra-low interest rates; he had pooh-poohed the possibility that the housing boom was a bubble that would burst like a sonic boom with reverberations that would flatten the economy. Suddenly he was confronted with a possibility that threatened to make him look like a fool. Was he icy cool, detached, above all personal considerations? Thinking only about banking regulations, national-income multipliers and the money supply? Or was he thinking the same thought that would occur to any normal human being in his place: “Oh, my God, my name will go down in history as the Herbert Hoover of Fed chairmen”?

Since the reasoning he claims as his inspiration is so obviously bogus, it is logical to classify his motives as personal rather than professional. He was protecting himself, not saving the country. And that brings us to the case of Chief Justice John Roberts.

Chief Justice John Roberts: Selfless, Self-Interested or Self-Preservationist?

For centuries, economists have identified self-interest as the driving force behind human behavior. This has exasperated and even angered outside observers, who have mistaken self-interest for greed or money-obsession. It is neither. Rather, it merely recognizes that the structure of the human mind gives each of us a comparative advantage in the promotion of our own welfare above that of others. Because I know more about me than you do, I can make myself happier than you can; because you know more about you than I do, you can make yourself happier than I can. And by cooperating to share our knowledge with each other, we can make each other happier through trade than we could be if we acted in isolation – but that cooperation must preserve the principle of self-interest in order to operate efficiently.

Strangely, economists long assumed that the same people who function well under the guidance of self-interest throw that principle to the winds when they take up the mantle of government. Government officials and representatives, according to traditional economics textbooks, become selfless instead of self-interested when they take office. Selflessness demands that they put the public welfare ahead of any personal considerations. And just what is the “public welfare,” exactly? Textbooks avoided grappling with this murky question by hiding behind notions like a “social welfare function” or a “community indifference curve.” These are examples of what the late F. A. Hayek called “the pretense of knowledge.”

Beginning in the 1950s, the “public choice” school of economics and political science was founded by James Buchanan and Gordon Tullock. This school of thought treated people in government just like people outside of government. It assumed that politicians, government bureaucrats and agency employees were trying to maximize their utility and operating under the principle of self-interest. Because the incentives they faced were radically different than those faced by those in the private sector, outcomes within government differed radically from those outside of government – usually for the worse.

If we apply this reasoning to members of the Supreme Court, we are confronted by a special kind of self-interest exercised by people in a unique position of power and authority. Members of the Court have climbed their career ladder to the top; in law, there are no higher rungs. This has special economic significance.

When economists speak of “competition” among input-suppliers, we normally speak of people competing with others doing the same job for promotion, raises and advancement. None of these are possible in this context. What about more elevated kinds of recognition? Well, there is certainly scope for that, but only for the best of the best. On the current Court, positive recognition goes to those who write notable opinions. Only Justice Scalia has the special talent necessary to stand out as a legal scholar for the ages. In this sense, Justice Scalia is “competing” in a self-interested way when he writes his opinions, but not with his fellow justices. He is competing with the great judges of history – John Marshall, Oliver Wendell Holmes, Louis Brandeis and Learned Hand – against whom his work is measured. Otherwise, a justice can stand out from the herd by providing the deciding or “swing” vote in close decisions. In other words, he can become politically popular or unpopular with groups that agree or disagree with his vote. Usually, that results in transitory notoriety.

But in historic cases, there is the possibility that it might lead to “Hooverization.”

The bigger government gets, the more power it wields. More government power leads to more disagreement about its role, which leads to more demand for arbitration by the Supreme Court. This puts the Court in the position of deciding the legality of enactments that claim to do great things for people while putting their freedoms and livelihoods in jeopardy. Any justice who casts a deciding vote against such a measure will go down in history as “the man who shot down” the Great Bailout/the Great Health Care/the Great Stimulus/the Great Reproductive Choice, ad infinitum.

Almost all Supreme Court justices have little to gain but a lot to lose from opposing a measure that promotes government power. They have little to gain because they cannot advance further or make more money and they do not compete with J. Marshall, Holmes, Brandeis or Hand. They have a lot to lose because they fear being anathematized by history, snubbed by colleagues, picketed or assassinated in the present day, and seeing their children brutalized by classmates or the news media. True, they might get satisfaction from adhering to the Constitution and their personal conception of justice – if they are sheltered under the umbrella of another justice’s opinion or they can fly under the radar of media scrutiny in a relatively low-profile case.

Let us attach a name to the status occupied by most Supreme Court justices and to the spirit that animates them. It is neither self-interest nor selflessness in their purest forms; we shall call it self-preservation. They want to preserve the exalted status they enjoy and they are not willing to risk it; they are willing to obey the Constitution, observe the law and speak the truth but only if and when they can preserve their position by doing so. When they are threatened, their principles and convictions suddenly go out the window and they will say and do whatever it takes to preserve what they perceive as their “self.” That “self” is the collection of real income, perks, immunities and prestige that go with the status of Supreme Court Justice.

Chief Justice John Roberts is an example of the model of self-preservation. In both of the ObamaCare decisions, his opinions for the majority completely abandoned his previous conservative positions. They plumbed new depths of logical absurdity – legal absurdity in the first decision and semantic absurdity in the second one. Yet one day after the release of King v. Burwell, Justice Roberts dissented in the Obergefell case, chiding the majority for “converting personal preferences into constitutional law” and disregarding the clear meaning of the language in the laws being considered. In other words, he condemned precisely those sins he had himself committed the previous day in his majority opinion in King v. Burwell.

For decades, conservatives have watched in amazement, scratching their heads and wracking their brains as ostensibly conservative justices appointed by Republican presidents unexpectedly betrayed their principles when the chips were down in high-profile cases. The economic model developed here lays out a systematic explanation for those previously inexplicable defections. David Souter, Anthony Kennedy, John Paul Stevens and Sandra Day O’Connor were the precursors to John Roberts. These were not random occurrences. They were the systematic workings of the self-preservationist principle in action.

DRI-192 for week of 5-24-15: Why Incremental Reform of Government Is a Waste of Time

An Access Advertising EconBrief:

Why Incremental Reform of Government Is a Waste of Time

Any adult American who follows politics has seen it, heard it and read it ad infinitum. A person of prominence proposes to reform government. The reform is supposed to “make government work better.” Nothing earthshaking, understand, just something to improve the dreadful state of affairs that confronts us. And if there’s one thing that everybody agrees on, it’s that government is a mess.

Newspapers turn such proposals out by the gross – they’re one of the few things that newspapers still publish in bulk. They can be found virtually every day in opinion sections. Let’s look at a brand-spanking-new one, bright and shiny, just off the op-ed assembly line. It appeared in The Wall Street Journal (5/27/2015). The two authors are a former governor of Michigan (John Engler) and the current president of North America’s Building Trades Unions (Sean McGarvey). The title – “It’s Amazing Anything Ever Gets Built” – aptly expresses the current level of exasperation with day-to-day government.

The authors think that infrastructure in America – “airports, factories and power plants” are cited specifically – is absurdly difficult to build, improve and replace. The difficulty, they feel, is mostly in acquiring government permission to proceed. “The permitting process for infrastructure projects… is burdensome, slow and inconsistent.” Why? “Gaining approval to build a new bridge or factory typically involves review by multiple federal agencies – such as the Environmental Protection Agency, the U.S. Forest Service, the Interior Department, the U.S. Army Corps of Engineers and the Bureau of Land Management – with overlapping jurisdictions and no real deadlines. Often, no single federal entity is responsible for managing the process. Even after a project is granted permits, lawsuits can hold things up for years – or, worse, halt a half-completed construction project.”

Gracious. These are men with impressive-sounding titles and prestigious resumes. They traffic in the measured prose of editorialists rather than the adjective-strewn rhetoric of alarmists. And their language seems all the more reasonable for its careful wording and conclusions. Naturally, having taken good care to gain the reader’s attention, they now hold it with an example: “The $3 billion TransWest Express [is] a multi-state power line that would bring upward of 3,000 megawatts of wind-generated electricity from Wyoming to about 1.8 million homes and businesses from Las Vegas to San Diego. The project delivers on two of President Obama’s priorities, renewable power and job creation, so the administration in October 2011 named [it] one of seven transmission projects to ‘quickly advance’ through federal permitting.”

You guessed it; the TransWest Express “has languished under federal review since 2007.” That’s eight (count ’em) years for a project that the Obama administration favors; we can all imagine how less well-regarded projects are doing, can’t we? In fact, we don’t have to use our imaginations, since we have the example of the Keystone XL Pipeline before us.

Last month, the Bureau of Land Management pronounced the ink dry on a well-done TransWest environmental-impact statement. That left only the EPA, the Federal Highway Administration, the Corps of Engineers, the Forest Service, the National Park Service, the Bureau of Reclamation, the U.S. Fish and Wildlife Service (!) and the Bureau of Indian Affairs (!!) to be heard from. At the rate these agencies are careening through the approval process, the TransWest Express should come online about the time that the world supply of fossil fuels is entirely extinguished – a case of exquisitely timed federal permitting.

According to Messrs. Engler and McGarvey, the worst thing about this egregious case study in federal-government overreach is that it leaves “thousands of skilled craft construction workers [to] sit on their hands.” Apparently, the Obama administration was in general agreement with this line of thought, because “President Obama’s Jobs Council examined how other countries expedite the approval of large projects” and its gaze fell upon Australia.

“Australia used to be plagued with overlapping layers of regulatory jurisdiction that resemble the current regulatory structure in the U.S.” before it installed the type of reform that the two authors are laying before us. The Australian state of New South Wales “now prioritizes permit applications based on their potential economic impact, and agreements among various reviewing agencies ensure that projects are subject to a single set of requirements.” As a result of this sunburst of reformist illumination, “permitting times have shrunk… from a once-typical 249 days to 134 days.”

Mind you, that was the President’s Jobs Council talking, not the authors. And the President, listening intently, created an “interagency council… dedicated to streamlining the permitting process.” Just to make sure we knew the President wasn’t kidding, “the White House also launched an online dashboard to track the progress of select federal permit applications.”

At this point, readers might envision the two authors reading their op-ed to a live audience consisting of Wall Street Journal readers – who would greet the previous two paragraphs with a few seconds of incredulous silence, followed by gales of hilarious laughter. Doubtless sensing the pregnancy of these passages, the authors follow with some rhetorical throat-clearing: “It has become clear, however, that congressional action is needed to make these improvements permanent and to require meaningful schedules and deadlines for permit review. Fortunately, Sens. Rob Portman (R-Ohio) and Claire McCaskill (D-Mo.) have introduced the Federal Permitting Improvement Act.”

“The bill would require the government to designate a lead agency to manage the review process when permits from multiple agencies are needed. It would establish a new executive office to oversee the speed of permit processing and to maintain the online dashboard that tracks applications.”

“The bill would also impose sensible limits on the subsequent judicial review of permits by reducing the statute of limitations on environmental lawsuits from six years to two years and by requiring courts to weigh potential job losses when considering injunction requests.”

Ah-hah. Let’s summarize this. President Obama, whose world renown for taking unilateral action to achieve his ends was earned by his selective ignoring and rewriting of law, confronted a situation in which two of his administration’s priorities were being thwarted by federal agencies over which he, as the nation’s Chief Executive, wielded administrative power. What action did he take? He turned to a presidential council – a century-old political buck-passing dodge to avoid making a decision. The council proceeded to do a study – another political wheeze that dates back at least to the 19th century and has never failed to waste money while failing to solve the problem at hand. When the study ostensibly uncovered an administrative reform purporting to achieve incremental gains in efficiency, the President (a) “streamlined the process” by telling two of the agencies that were creating the worst problems in the first place to cooperate with each other via an additional layer of bureaucracy (an “interagency council”) and (b) created an “online dashboard” so that we could all watch the ensuing slow-motion failure more closely. All these Presidential actions took place in 2011. It is now mid-2015.

And what do our two intrepid authors propose to deal with this metastatic bureaucratic cancer? Congress points its collective finger at one of the agencies causing the original problem and gives it more power by making it “manager” of the review process. (This action implies that the root cause of the problem is that somebody in government doesn’t have enough power.) Of course, the premise that “permits from multiple agencies are needed” is taken completely for granted. Next, Congress establishes still another layer of bureaucracy (the “executive office”) to “oversee” the very problem that is supposedly being solved (i.e., the “speed of permit processing”). (This implies that we have uncovered two more root causes of the problem – not enough layers of bureaucracy and not enough oversight exercised by bureaucrats.) A classic means of satisfying everybody in government is to get every branch of government into the act. Accordingly, Congress points its collective finger at “the courts” and tells them to “weigh” job losses when considering requests for injunctions against projects. (The fact that this conflicts with the original “potential economic impact” mandate doesn’t seem to have concerned Congress or, for that matter, Messrs. Engler and McGarvey.) Finally, Congress throws a last glance at this unfolding Titanic scenario and, collective chins resting on fists, rearranges one last deck chair with a four-year reduction in the statute of limitations on environmental lawsuits.

The most amazing thing is not that anything ever gets built, but that these two authors could restrain their own laughter long enough to submit this op-ed for publication. The above summary reads more like a parody submitted for consideration by Saturday Night Live or Penn and Teller.

Two questions zoom, rocket-like, to the reader’s lips upon reading this op-ed and the above summary. What good, if any, could possibly result from this kind of proposal? Why do these proposals pop up with monotonous regularity in public print? The answers to those questions give rise in turn to a third question: What are the elements of a truly effective program for government reform and why has it not emerged?

Why Doesn’t Incremental Reform Work? 

The reform proposed by Messrs. Engler and McGarvey is best characterized as “incremental” because it does not change the structure of government in any fundamental way; it merely tinkers with its operational details. It aims only to change one small part of the vast federal regulatory apparatus (permitting) by improving one element (its speed of operation) to a noticeable but modest degree (reducing the average [?] time needed to secure a permit from 249 days to 134 days). And the rhetoric employed by the authors stresses this point – aside from the attention-grabbing headline, they are at pains to emphasize their modest goal as a major selling point of their proposal. They’re not trying to change the world here. “Americans of all stripes know that something is seriously wrong when other advanced countries can build infrastructure faster and more efficiently than the U.S., the country that built the Hoover Dam.” They use words like “bipartisan proposal” and “strengthen the administration’s efforts” rather than heaping ridicule on the blatant hypocrisy and stark contradiction of the Obama administration’s actions. They want to get a bill passed. But do they want actual reform?

Superficially, it seems odd that two authors would propose reform while opposing reform. Yet close inspection confirms that hypothesis not only for this op-ed, but in general. The authors deploy the standard op-ed bureaucratic argle-bargle that we have absorbed by osmosis from thousands of other op-eds – “infrastructure,” “permitting,” “priorities,” “job creation,” “streamline [government] process,” “expedite approval,” “implemented reforms,” “economic impact,” “manage the review process,” “lead agency,” “executive office.” The trouble is that if all this really worked, we wouldn’t be where we are today. The TransWest Express review wouldn’t have begun in 2007 and still be in limbo today. The Obama Administration wouldn’t have started remedial measures in 2011 and still be waiting on them to take effect in 2015. The U.S. wouldn’t be staggering under a cumulative debt load exceeding its GDP. The federal government wouldn’t have unfunded liabilities exceeding $24 trillion. The Western world wouldn’t be supporting a welfare state that is teetering on the brink of collapse.

Who are John Engler and Sean McGarvey? John Engler was formerly the Governor of Michigan. At one time, he was considered the bright hope of the Republican Party. He began by trying to reform state government in Michigan. He failed. Instead, he was co-opted by big government. Detroit went on to declare bankruptcy. John Engler left office and went to work for the Business Roundtable. Business organizations like the Chamber of Commerce exist today for the same reason that other special-interest organizations like La Raza and AARP exist – to secure special government favors for their members and protect them from being skewered by the special favors doled out to other special-interest organizations. Sean McGarvey is President of North America’s Building Trades Unions, a department of the AFL-CIO that performs coordinative, lobbying and “research” (i.e., public-relations) functions. Unions can achieve higher wages for their members only by affecting either the supply of labor or the demand for it. There is precious little they can do to affect the demand for labor, which comes from businesses, not unions. Unions can affect the supply of labor only by reducing it, which they do in various ways. This causes unemployment, which in turn exerts continuous public-relations pressure on unions to support “job creation” measures. But true job creation can come only from the combination of consumer demand and labor productivity, which underlie the economic concept of marginal value productivity of labor.
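
The economics behind that last sentence can be stated compactly. In standard textbook notation (a sketch of the underlying theory, not anything in the Engler-McGarvey op-ed), the marginal value product of labor is

\[ MVP_L = P \times MP_L , \]

where \(P\) is the price consumers are willing to pay for the output (consumer demand) and \(MP_L\) is the additional output produced by one more unit of labor (labor productivity). A firm profitably adds workers only up to the point where the wage equals \(MVP_L\); a union that pushes wages above that level reduces employment rather than creating it.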

In the jargon of economics, all these organizations are rent-seekers: they seek benefits unobtainable in the marketplace. They represent their members in their capacities as producers or input suppliers, not in their capacities as consumers. In other words, rent-seekers structure the op-eds they write and their pleas for “reform” to raise the prices of the goods and inputs supplied by their member/constituents and/or to provide jobs to them. Virtually all the op-eds appearing in print are written by rent-seekers striving to shape pseudo-reforms in ways that suit their particular interests.

In the Engler-McGarvey case, there are two possibilities. Possibility 1: The Federal Permitting Improvement Act actually passes Congress and actually achieves the incremental improvement promised. In this wildly unlikely case, Mr. Engler’s business clients benefit from the modest reduction in permitting times. Since the entire wage and hiring process for infrastructure projects – government or otherwise – is grossly biased in favor of union labor, Mr. McGarvey’s clients benefit as well. Possibility 2: As the above summary suggests, the likelihood of actual incremental improvement is infinitesimal even if the legislation were to pass, since it requires efficient behavior by the same government bureaucracy that has caused the problems requiring reform in the first place. So the chances are that the result of the reform proposal will be nil.

As far as you and I are concerned, this represents a colossal waste of time and money. But for Messrs. Engler and McGarvey, this is not so. They are creatures of government. The next-best alternative to positive benefits for their client-constituents is no change in the status quo. For Mr. Engler, the status quo gives the biggest companies big advantages over smaller competitors. For Mr. McGarvey, the status quo gives unions and union labor big advantages that they cannot begin to earn in the competitive marketplace. Unions have been losing market share steadily in the private sector for many years. But they have been gaining influence and membership in the government sector, which is ruled by legislation and lobbyists.

Op-eds and reform proposals like this one allow people like Mr. Engler and Mr. McGarvey to earn their lucrative salaries as lobbyist and union president/lobbyist, respectively, by sponsoring and promoting pseudo-reform policies whose effects on their client-constituents can be characterized as “heads we win, tails we break even.”

But what about the effects on the rest of us?

What Would Real Reform Require – and Why Don’t We Get It?

A fundamental insight of economics – we might even call it THE fundamental insight – is that consumption is the end-in-view behind all economic activity. All of us are consumers. But this very fact works against us in the realm of big government, because this diffuses the monetary stake each one of us has in any one particular issue as a consumer. A tax on an imported good will raise its price, which rates to be a bad thing for millions of Americans. But because that good forms only a small part of the total consumption of each person, the money it costs him or her will be small. The cost will not be enough to motivate him or her to organize politically against the tax. On the other hand, a worker threatened with losing his or her job to the competition posed by the imported good may have a very large sum of money at stake – or may believe that to be true. The same is true for owners of domestic import-competing firms. Consequently, there are many lobbyists for legislation against imports and almost no lobbyists in favor of free, untaxed international trade. Yet economists know that free international trade will create more happiness, more overall goods and services and almost certainly more jobs than will international trade that is limited by taxes and quotas.
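
A purely hypothetical arithmetic example (the numbers are invented for illustration) shows how lopsided the incentives are:

\[ 100{,}000{,}000 \text{ consumers} \times \$2 = \$200 \text{ million in diffuse costs,} \]
\[ 10{,}000 \text{ producers} \times \$10{,}000 = \$100 \text{ million in concentrated benefits.} \]

The total loss to consumers dwarfs the gain to producers, but no individual consumer will organize politically over $2, while every producer has $10,000 worth of reasons to hire a lobbyist.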

This explains why so many op-ed writers are rent-seekers and so few argue in favor of economic efficiency. True reform of government would not focus on the aims of rent-seekers. It would not strive to preserve the artificial advantages currently enjoyed by large companies – neither, for that matter, would it seek to preserve the presence of small companies merely for their own sake. True reform would allow businesses to perform their inherent function; namely, to produce the goods and services that consumers value the most. The only way to effect that reform is to remove the artificial influence of government from markets and confine government to its inherent limited role in preventing fraud and coercion.

Based on this evaluation, we might expect to see economists writing op-eds opposing the views of rent-seekers. Instead, this happens only occasionally. Economists are just as keenly attuned to their self-interest as other people. Most economists are employed by government, either directly as government employees or indirectly as teachers in public universities or fellows in research institutions funded by government. At best, these economists will favor the status quo rather than true reform. Only the tiny remnant of economists who work outside government for free-market oriented research organizations can be relied upon to support true reform.

Incremental Reform Vs. Structural Reform 

Incremental reforms are sponsored by rent-seekers. They are designed either to fail or, if they succeed, to yield rents to special interests instead of real reform. Real reform must be pro-consumer in nature. But the costs of organizing consumers are vast. In order to mobilize a reform of that scale, it must offer benefits that are just as vast or greater in size and scope. That means that true reform must be structural rather than incremental. It cannot merely preserve the status quo; it must overturn it.

In other words, true reform must be revolutionary. This does not imply that it must be violent. The reform that overturned Soviet Communism, perhaps the most powerful totalitarian dictatorship in human history, was almost completely non-violent. Admittedly, it had outside help in the form of political and moral support from people like Lech Walesa, Pope John Paul II, British Prime Minister Margaret Thatcher and, most of all, President Ronald Reagan.

As the efforts of the Tea Party have recently demonstrated, pro-consumer reform cannot be “organized” in the mechanistic sense. It can only arise spontaneously because that is the least costly way – and therefore the only feasible way – to achieve it.

We are unlikely to read about such a reform in the public prints because most of them are owned or sponsored by people who have vested interests in big government. These interests are usually financial but may sometimes be purely ideological. Big government may be a means of suppressing competition. It may be a means of subsidizing their enterprise. It may be a means of providing a bailout when digital competition becomes too fierce. In any event, we cannot look to the op-ed pages for leadership of real government reform.

DRI-168 for week of 5-17-15: Who Killed the Amtrak 8?

An Access Advertising EconBrief: 

Who Killed the Amtrak 8?

At 9:21 PM on Tuesday, May 12, 2015, Amtrak Northeast Regional passenger train 188 was proceeding northeast en route from Washington, D.C. to New York City. Specifically, it was traveling through Philadelphia a few miles north of 30th Street station in an area called Frankford Junction. Passing through a short stretch of eastbound track, it came to a fairly sharp northeast curve. The speed limit for a train entering the curve was 50 mph. According to the train’s “black box,” or data recorder, it was traveling at 106 mph as it entered the curve. The train’s engineer apparently applied emergency brakes immediately after reaching the curve, but a few seconds later – the point at which the data recorder stopped receiving data – the train had slowed only to 102 mph.

The reason the data recorder ceased operations was that the train derailed at that point. Seven people were killed at the crash site and one died subsequently; around thirty others were hospitalized with injuries of varying severity. The dead included the CEO of a technology firm and a naval-academy midshipman.

Reactions were predictable. Philadelphia Mayor Michael Nutter solemnly, somberly lamented the tragedy. Federal-government regulators stressed the desirability of transportation safety and the length of time needed to decelerate a speeding train. And, most predictable of all, politicians and political commentators placed blame on their political opponents.

Murdering Republicans Strike Again

An activist liberal policy group called “Agenda Project Action Fund” made a video giving their version of the events leading up to and including the derailment. It was titled “Republican Cuts Kill Again.” The “cuts” referred to were budget cuts by the U.S. Congress, a majority of whose members are currently Republican. An author named Josh Israel of “Think Progress” wrote an article titled “Currently Available Technology May Have Prevented Fatal Amtrak Crash, But Congress Never Funded It.” Politico.com chipped in with the headline “House panel votes to cut Amtrak budget hours after deadly crash.” Rep. Nita Lowey sententiously volunteered that “starving rail of funding will not enable safer train travel” – without, of course, mentioning that the combined federal and state Amtrak budgets have increased every year since 2008.

The immediate questions that arise are: What is this “currently available technology?” Why didn’t Congress fund it? But that doesn’t begin to exhaust the relevant sources of curiosity. How long has the technology been available? Why is Congressional funding even an issue in the first place, since Amtrak is nominally a for-profit corporation? Most importantly of all, what is the optimal framework for providing transportation services in general and passenger-rail services in particular – and how does Amtrak fit into that framework?

What is Amtrak and Why are People Saying These Terrible Things About It?

“Amtrak” is a hybrid name for the National Railroad Passenger Corporation. It is one of those centaur-like organizations common to modern big government – a nominally for-profit corporation that is nevertheless publicly funded. It receives annual appropriations from the federal government that have averaged around $1.4 billion in recent years. It also receives annual funding from various state-level sources, particularly about 14 state governments and the three largest Canadian provinces.

Amtrak serves 46 U.S. states and those three Canadian provinces. But the bulk of its business is provided in what is called the “Northeast Corridor” of the U.S. Although Amtrak’s routes comprise over 500 destinations, more than two-thirds of its nearly 31 million passengers come from the ten largest metropolitan areas in the U.S.; 83% travel routes of less than 400 miles.

Amtrak began operations on May 1, 1971. Today it runs over 300 trains per day across 21,000 miles of track. It has over $2 billion in annual revenue. But it has yet to turn a profit. It has always required subsidies. During the Reagan administration, these subsidies hit an annual low of $600 million before rising again subsequently. They have waxed and waned, but state-level subsidies have recently tended to compensate for cuts at the federal level. Government has also provided capital subsidies for investment; this explains why the left wing can call for Congress to fund safety improvements.

Although Congress provided limited authorization for Amtrak to deviate from labor-union agreements in the late 1990s, Amtrak has long employed union labor. It negotiates with 14 separate unions and has 24 separate agreements with those unions. For many decades under federal regulation by the ICC, the railroad business was a classic case of “featherbedding,” or the employment of superfluous workers in union-protected jobs. This remains true today with Amtrak. It is no coincidence that Amtrak’s national headquarters is in Washington, D.C.

Amtrak is a lightning rod for political controversy. The left wing loves it because mass transit is a sacred cow of both the old left and environmentalists. The fact that Amtrak is horrendously inefficient is politically advantageous to the left because it means that it employs more labor than necessary to produce a given output – the very thing that outrages any competent economist delights the left wing.

It is true that left-wingers cite cost comparisons claiming that train travel is the most efficient form of passenger transportation. Unfortunately for their argument, the comparisons are bogus. They use “on-time” performance as a criterion for comparing airlines and trains while rigging the definitions to allow trains absurd margins of lateness. Even more telling is the fact that they completely ignore the element of consumer demand. The reason most people do not ride trains is the same reason that they prefer to drive automobiles – cars provide point-to-point transportation and maximum personal convenience. This economizes on the value of an individual’s time. Since we are all mortal and have limited hours in the day and in our lifetimes, this is a vast benefit to us. But it is completely ignored in the cost comparisons claiming superiority for train travel. When economists conduct the comparisons and account for the value of travelers’ time, this claimed superiority for mass transit vanishes.
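
A hypothetical comparison (the numbers are invented purely for illustration) shows what happens when the traveler’s time is priced in. Suppose a trip costs $50 and takes 5 hours door-to-door by train, or $120 and 2.5 hours by air, and the traveler values time at $30 per hour:

\[ \text{Full cost}_{\text{train}} = \$50 + 5 \times \$30 = \$200, \qquad \text{Full cost}_{\text{air}} = \$120 + 2.5 \times \$30 = \$195 . \]

The mode with the higher fare turns out to be the cheaper one once time is counted – exactly the element the pro-rail cost comparisons leave out.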

The right wing hates Amtrak, but that doesn’t mean that it is unpopular with Republicans. Amtrak is popular in the most populous parts of the country, which means that most of the geographic U.S. (but a minority of the population) is subsidizing a relatively small part of the country (but a majority of the population). Consequently, Republicans – most of whom have the political backbone of invertebrates – tend to support subsidies. (This is particularly true in the House of Representatives, whose seats are apportioned by population.) How else could the subsidies have continued, year after year after year? Instead of cutting Amtrak loose or voting for privatization, Republicans content themselves with rhetorical volleys against it and cosmetic measures designed to “make it work better.”

The “Currently Available Technology”

The “currently available technology” referred to by the liberal activist group is the Positive Train Control (PTC) system. It uses a combination of radio signals and GPS technology to pinpoint the position of all trains. Not only can it slow speeding trains, it can also prevent collisions between trains, prevent trains from proceeding through wrongly positioned switches and prevent trains from entering work zones. In other words, PTC is an all- (or at least, multi-)purpose train safety system.

In 2008, a commuter train collided head-on with a freight train at Chatsworth, California, with heavy loss of life. Senator Dianne Feinstein (D-CA), one of the most powerful Senate Democrats, did not let this crisis go to waste. She seized the chance to push through legislation mandating full installation of PTC for both passenger and freight railroad systems in the U.S. by 2015.

“So what?” many readers are doubtless thinking to themselves. Isn’t that the way the system is supposed to work? Isn’t this a victory for big government, the regulatory state?

Not hardly. Just the opposite, in fact.

Why PTC Is DOA

To an economist, the first thing that pops into mind is the question: If PTC is the greatest thing since sliced bread, why does Congress have to mandate its adoption? After all, freight railroads have been an extremely successful industry for years. Those ads touting their success in squeezing efficiency from train fuel are not hyperbole. Warren Buffett didn’t buy Burlington Northern because he thought its management was brain-dead. Why in the world wouldn’t the industry rush to adopt PTC if it were the last word in safety, since safety is vital to any successful freight operation?

The answer to that question was provided by the Reason Foundation. Thanks to the expertise of its founder, Robert Poole, the Foundation has long been recognized as a ranking authority on transportation policy. Policy analyst Baruch Feigenbaum gave his readers the lowdown on PTC.

The Federal Railroad Administration performed a cost-benefit study of PTC technology. It found a projected benefit range (discounted present value) of $0-$400 million. But the cost was $13 billion. Whoops. That means that, even taking the most generous benefit estimate, every $1 of benefit cost more than $30 to obtain.
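To make the arithmetic explicit – a back-of-the-envelope check using only the figures just cited, taking the $400 million ceiling as the most generous case:

\[
\frac{\text{cost}}{\text{maximum benefit}} \;=\; \frac{\$13\ \text{billion}}{\$0.4\ \text{billion}} \;\approx\; 32.5
\]

At the bottom of the projected benefit range ($0), the ratio is not even defined; there is no benefit at all to weigh against the $13 billion.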

But because Congress, in its infinite wisdom, forced both passenger and freight railroads to install it, the entire railroad business has been laboriously slaving away at it for the last seven years. Of course, nobody is too crazy about throwing money down this rathole. The use of radio signals requires coordination with the FCC, and Amtrak, which can’t even coordinate with itself well enough to make a profit, is finding that difficult. Then there are the various regulatory hurdles. Yes, that’s right – the same government that legislatively mandated the adoption of PTC is throwing up regulatory hurdles to it in the form of environmental and historic-preservation reviews for each of the 20,000 required communications antennas in the system. This has led to a year-long moratorium on installation, according to the Association of American Railroads’ CEO Edward Hamberger in a Wall Street Journal op-ed.

But… uh, well, at least PTC is better than nothing, right? Wouldn’t we be stuck with no safety at all if we hadn’t passed that unbelievably dumb, wildly wasteful law? Apparently that’s what the political left wants us to believe; it is the left’s intellectual fall-back position when confronted with the facts about PTC. Presumably, that is as far as the average person’s thinking goes on the subject.

But the inherent meaning of cost-benefit analysis is that “cost” refers to foregone alternatives. When cost exceeds benefit, there are better, more beneficial ways of spending the money than on the project being analyzed because the foregone alternatives represent benefits available elsewhere.

And in this particular case, some of those benefits are alternative safety projects within the railroad industry itself.

ATC – A Better, Lower-Cost Alternative

Both Feigenbaum and Hamberger describe another type of railroad safety technology now in use. Amtrak and the freight railroads currently utilize a safety technology aimed specifically at speeding trains, called Automatic Train Control (ATC). It is installed on the tracks and sends signals to trains telling them what the speed limit is, allowing the train to slow itself automatically before reaching the speed-change point. In short, it is a mechanism for eliminating the particular type of human (engineer) error apparently responsible for the Philadelphia derailment. It would have prevented the Philadelphia accident.

It is quite true that ATC handles only this particular type of error; it lacks the all-encompassing scope of PTC. But ATC has the advantage of being relatively cheap and easy to install. We know this because after the recent Philadelphia derailment, Amtrak quietly installed ATC on the section of track where the accident occurred. It accomplished the installation in one weekend.

Nor is this the only type of alternative safety improvement to ponder. Marc Scribner of the Competitive Enterprise Institute recently noted that about 270 people die every year in accidents at train crossings. Why not take some of that $13 billion and devote it instead to improving crossing safety in various low-cost ways, thereby saving dozens of lives every year instead of 8-10 lives every 7 years or so?
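The comparison requires nothing more than converting the figures just cited to annual rates:

\[
\frac{8\text{–}10\ \text{deaths}}{7\ \text{years}} \;\approx\; 1.1\text{–}1.4\ \text{deaths per year from PTC-preventable accidents}
\quad\text{versus}\quad
\approx 270\ \text{crossing deaths per year.}
\]

In other words, the fatality pool at grade crossings is roughly two hundred times larger than the one PTC addresses.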

Both Amtrak and the private freight railroads have installed ATC on parts of their networks. Why haven’t they completed that installation? Well, they both labor under the burden of meeting the mandatory legal deadline for PTC, for which they will eventually be fined when 2015 expires without completion of the system. Hamberger estimates 2018 as the PTC completion date, with another two years necessary for “testing and validation.”

Who Killed the Amtrak 8? 

Given the facts as outlined above, it is obvious who killed the Amtrak 8. Big government and the regulatory state killed them. Even Amtrak might have had the corporate brains to install ATC throughout the Northeast Corridor – by far its biggest revenue generator and arguably profitable in its own right – were it not faced with the overwhelming burden of having to install PTC.

This verdict is seconded by Wall Street Journal columnist Holman Jenkins in his latest column (WSJ, 05/20/2015, “How Congress Railroaded the Railroads”). “Is there a more absurd technology than positive train control, which Congress imposed as an unfunded mandate on railroads in 2008, and which supposedly would have prevented last week’s Philly Amtrak crash? Except it didn’t since its implementation has been draggy and its design so clearly inferior to cheaper, faster, more up-to-date solutions.”

Even beyond this, though, is the decisive point relating to the fate of passenger rail had big government not established and continually sustained Amtrak in the first place.

A World Without Amtrak

When Lyndon Johnson succeeded John F. Kennedy in the White House, he recognized that Kennedy’s assassination had created an extraordinary mandate for change. Johnson was perhaps the century’s premier legislative spearhead, and he essentially created the regulatory welfare state that presides over the country today. Johnson predicted that it would take 50 years to determine the success or failure of his “experiment” in social policy. A half-century later, we can deliver the verdict that the welfare state is imploding not just in the U.S., but worldwide.

Similarly, forty-four years should be sufficient to pronounce Amtrak a failure. Its infrastructure is ramshackle, its finances are a mess and its organization is a shambles. Amtrak’s only positive feature is a core constituency that leaves open the possibility of a profitable passenger rail service. That constituency, in the “Northeast Corridor” of America, boasts a population density roughly ten times greater than the rest of the U.S. This makes it possible for a for-profit, private-sector business to identify and isolate this customer base. In no sense is passenger-rail service a “public good” in the classical economic sense; it is neither non-excludable nor non-rival.

Thus, the obvious solution to the problems plaguing Amtrak, of which safety is merely the one occupying front-pages currently, is to end its public subsidies, acknowledge its bankruptcy and sell off its assets. This includes its rights of way, which would enable a privatized successor to operate passenger rail for the benefit of the large number of people in the relatively confined area where that business is economically feasible.

To be fair, it should be noted that some are skeptical of privatization not on principle but as a practical matter. Like Holman Jenkins of The Wall Street Journal, they think the profits of the Northeast Corridor are overestimated and the costs of service underestimated. Variable costs should take into account incremental wear and tear on infrastructure, which is now obscured by capital subsidies. Congress has given Amtrak preferential right-of-way over freight traffic on lines owned by the freight railroads – another implicit subsidy that would vanish under privatization. Various regional commuter transporters now tacitly agree not to compete with Amtrak, which is still another hidden subsidy. Could a privatized rail carrier still serve the Northeast Corridor without these subsidies? The only way to know is to try it and see.

To be workable, privatization would demand relief from the killing mandate currently crippling Amtrak and greatly hindering the freight railroads – namely, the 2008 law mandating adoption of the already-obsolete and dreadfully expensive PTC. This would save hundreds, if not thousands, of lives and improve life for millions of people. The only losers would be regulators, politicians and, possibly, union members who would lose jobs and be forced to take lower-paying ones. The union members could be bought off through severance. The others would simply have to eat their losses. In fact, this is increasingly the choice that confronts us not merely in passenger rail but in the entire transportation system.

As things stand today, the American transportation system is a massive form of human sacrifice to the gods of government regulation and unionization. Tens of thousands of Americans lose their lives every year so that government regulators and union members can hold their jobs and earn more money than would otherwise be the case.

Let us hear from Holman Jenkins again: “Which brings us to another headline from the brave new world of self-driving vehicles. This month the truck maker Freightliner introduced a robotically controlled truck, licensed to operate on the roads of Nevada. Its onboard system, designed to relieve drivers of the monotony of motoring for hours down calm stretches of well-marked interstate, ‘never gets tired. It never gets distracted. It’s always at 100%,’ company executive Wolfgang Bernhard told the media.”

“Alas, Mr. Bernhard deflated expectations by predicting that, though the system is ready to roll today, deployment is likely five years off. ‘The biggest obstacle that we see is the regulatory framework'” [emphasis added].

“Five years may be optimistic: An unspoken burden for the future is the legacy of the Toyota travesty of 2010, in which congressmen and, most damningly, a head of the Transportation Department, whose agency knows better, preferred to allege an undetected electronic bug in Toyotas rather than acknowledge that drivers (i.e., voters) cause accidents by pressing the gas instead of the brake.”

“This scandal, hugely costly to Toyota and largely fabricated, has never been acknowledged or investigated by the government or the media…One big inconvenient precedent lies in its wake. As Toyota found, because it’s impossible to prove the nonexistence of a software bug, anytime there’s an accident involving a system in which software plays a role, the software will be blamed and the driver will be excused. Perhaps the only way forward, then, is to remove the driver altogether” [emphasis added].

Whether it is cars, planes or trains, the dirty little secret that nobody is willing to talk about is the driver – the source of almost all the deaths and injuries. Here we have a train traveling at 106 mph in a 50 mph zone and an engineer with a case of amnesia. Sure, there was a dent in the windshield and talk of a projectile. But the dent didn’t penetrate the windshield and there is no logical explanation for how a projectile would cause the train’s speed to double. Is a left-wing lawyer going to emerge claiming that the train’s engine was manufactured by Toyota? Or are we eventually going to wind up with “driver error” as the cause of the derailment? Once again, with trains as with planes and cars, self-driving is the ultimate way forward.

Holman Jenkins is now acknowledging what this space declared over two years ago with respect to self-driving cars and almost a year ago with respect to commercial aviation. Now the same chickens are roosting on the tracks of passenger rail. Big government and regulators are standing athwart technology and yelling “Stop!” while over 30,000 people are killed every year on the nation’s roads, hundreds can die in a single commercial air crash and hundreds more die annually in various forms of railroad accident.

Up to now, none dare call it murder. Yet Democrats get away with accusing Republicans of murder for the sin of holding a Congressional majority.

DRI-211 for week of 3-29-15: Which First – Self-Driving Cars or Self-Flying Planes?

An Access Advertising EconBrief: 

Which First – Self-Driving Cars or Self-Flying Planes?

As details of the grisly demise of Lufthansa’s Germanwings flight 9525 gradually emerged, the truth became inescapable. The airliner had descended 10,000 feet in a quick but controlled manner, not the dead drop or death spiral of a disabled plane. No distress calls were sent. It became clear that the airplane had been deliberately steered into a mountainside. The recovery of the plane’s cockpit voice recorder – one of the “black boxes” – provided the anticlimactic evidence of a mass murder wrapped around an apparent suicide: the sound of a chair scraping the floor as the captain excused himself from the cockpit, followed by the sound of the cockpit door closing, followed by the steady breathing of the co-pilot. The sounds of the captain’s knocks and increasingly frantic demands to be readmitted were finally accompanied by the last-minute screams and shrieks of the passengers as they saw the French Alps looming up before them.

The steady breathing inside the cockpit showed that the copilot remained awake until the crash.

As we would expect, the reaction of the airline, Lufthansa, and government officials is now one of shock and disbelief. Brice Robin, Marseille public prosecutor, was asked if the copilot, Andreas Lubitz, had – to all intents and purposes – committed suicide. “I haven’t used the word suicide,” Robin demurred, while acknowledging the validity of the question. Carsten Spohr, Lufthansa’s CEO and himself a former pilot, begged to differ: “If a person takes 149 other people to their deaths with him, there is another word than suicide.” The obvious implication was that the other people were innocent bystanders, making this an act of mass murder that dwarfed the significance of the suicide.

This particular mass murder caught the news media off guard. We are inured to the customary form of mass murder, committed by a lone killer with handgun or rifle. He is using murder and the occasion of his death to attain the sense of personal empowerment he never realized in life. The news media reacts in stylized fashion with pious moralizing and calls for more and stronger laws against whatever weapon the killer happened to be using.

In the case of the airline industry, the last spasm of government regulation is still fresh in all our minds. It followed in response to the mass murder of 3,000 people on September 11, 2001, when terrorists hijacked commercial airliners and crashed them into the World Trade Center and the Pentagon. Regulation has marred airline travel with the pain of searches, scans, delays and tedium. Beyond that, the cockpits of airliners have been hardened to make them impenetrable from the outside – in order to provide absolute security against another deliberately managed crash by madmen.

Oops. What about the madmen within?

But, after a few days of stunned disbelief, the chorus found its voice again. That voice sounded like Strother Martin’s in the movie Cool Hand Luke. What we have here is a failure to regulate. We’ll simply have to find a way to regulate the mental health of pilots. Obviously, the private sector is failing in its clear duty to protect the public, so government will have to step in.

Now if it were really possible for government to regulate mental health, wouldn’t the first priority be to regulate the mental health of politicians? Followed closely by bureaucrats? The likely annual deaths attributable to government run to six figures, far beyond any mayhem suicidal airline pilots might cause. Asking government to regulate the mental health of others is a little like giving the job to the inmates of a psychiatric hospital – perhaps on the theory that only somebody with mental illness can recognize and treat it in others.

Is this all we can muster in the face of this bizarre tragedy? No, tragedy sometimes gives us license to say things that wouldn’t resonate at other times. Now is the time to reorganize our system of air-traffic control, making it not only safer but better, faster and cheaper as well.

The Risk of Airline Travel Today: The State of the Art

Wall Street Journal columnist Holman Jenkins goes straight to the heart of the matter in his recent column (03/29/2015, “Germanwings 9525 and the Future of Flight Safety”). The apparent mass-murder-by-pilot “highlights one way the technology has failed to advance as it should have.” Even though the commercial airline cockpit is “the most automated workplace in the world,” the sad fact is that “we are further along in planning for the autonomous car than for the autonomous airliner.”

How has the self-flying plane become not merely a theoretical possibility but a practical imperative? What stands in the way of its realization?

The answer to the first question lies in comparing the antiquated status quo in airline traffic control with the potential inherent in a system updated to current technological standards. The second answer lies in the recognition of the incentives posed by political economy.

Today’s “Horse and Buggy” System of Air-Traffic Control

For almost a century, air-traffic control throughout the world has operated under a “corridor system.” This has been accurately compared to the system of roads and lanes that governs vehicle transport on land, the obvious difference being that it incorporates additional vertical dimensions not present in the latter. Planes file flight plans that notify air-traffic controllers of their origin and ultimate destination. The planes are required to travel within specified flight corridors that are analogous to the lanes of a roadway. Controllers enforce distance limits between each plane, analogous to the “car-lengths” distance between the cars on roadways. Controllers regulate the order and sequence of takeoffs and landings at airports to prevent collisions.

Unfortunately, the corridor system is pockmarked with gross inefficiencies. Rather than being organized purely by function, it is instead governed primarily by political jurisdiction. This is jarringly evident in Europe, home to many countries in close physical proximity. An airline flight from one end of Europe to another may pass through dozens of different political jurisdictions, each time undergoing a “handoff” of radio contact for air-traffic control between plane and ground control.

In the U.S., centralized administration by the Federal Aviation Administration (FAA) surmounts some of this difficulty, but the antiquated reliance on radar for geographic positioning still demands that commercial aircraft report their positions periodically for handoff to a new air-traffic control boss. And the air corridors in the U.S. are little changed from the dawn of air-mail delivery in the 1920s and 30s, when hillside beacons provided vital navigational aids to pilots. Instead of regular, geometric air corridors, we have irregular, zigzag patterns that cause built-in delays in travel and waste of fuel. Meanwhile, the slightest glitch in weather or airport procedure can stack up planes on the ground or in the air and lead to rolling delays and mounting frustration among passengers.

Why Didn’t Airline Deregulation Solve or Ameliorate These Problems? 

Throughout the 20th century, the demand for airline travel grew like Topsy. But the system of air-traffic control remained antiquated. The only way that system could adjust to increased demand was by building more airports and hiring more air-traffic controllers. Building airports was complicated because major airports were constructed with public funds, not private investment. The rights-of-way, land acquisition costs, and advantages of sovereign immunity all militated against privatization. When air-traffic controllers became unionized, this guaranteed that the union would strive to restrict union membership in order to raise wages. This, too, made it difficult to cope with increases in passenger demand.

The deregulation of commercial airline entry and pricing that began in 1978 was an enormous boon to consumers. It ushered in a boom in airline travel. Paradoxically, this worsened the quality of the product consumers were offered because the federal government retained control over airline safety. This guaranteed that airport capacity and air-safety technology would not increase pari passu with consumer demand for airline travel. As Holman Jenkins puts it, the U.S. air-traffic-control system is “a government-run monopoly, astonishingly slow to upgrade its technology.” He cites the view of the leading expert on government regulation of transportation, Robert Poole of the Reason Foundation, that the system operates “as if Congress is its main customer.”

Private, profit-maximizing airlines have every incentive to ensure the safe operation of their planes and the timely provision of service. Product quality is just as important to consumers as the price paid for service; indeed, it may well be more important. History shows that airline crashes have highly adverse effects on the business of the companies affected. At the margin, an airline that offers a lower price for a given flight or provides safer transportation to its customers or gives its customers less aggravation during their trip stands to make more money through its actions.

In contrast, government regulators have no occupational incentive to improve airline safety. To be sure, they have an incentive to regulate – hire staff, pass rules, impose directives and generally look as busy as possible in their everyday operations. When a crash occurs, they have a strong incentive to assume a grave demeanor, rush investigators to the scene, issue daily updates on results of investigations and eventually issue reports. These activities are the kinds of things that increase regulatory staffs and budgets, which in turn increase salaries of bureaucrats. They serve the public-relations interests of Congress, which controls regulatory budgets. But government regulators have no marginal incentive whatsoever to reduce the incidence of crashes or flight delays or passenger inconvenience – their bureaucratic compensation is not increased by improved productivity in these areas despite the fact that THIS IS REALLY WHAT WE WANT GOVERNMENT TO DO.

Thus, government regulators really have no incentive to modernize the air-traffic control system. And guess what? They haven’t done it; nor have they modernized the operation of airports. Indeed, the current system meets the needs of government well. It guarantees that accidents will continue to happen – this will continue to require investigation by government, thus providing a rationale for the FAA’s accident-investigation apparatus. Consumers will continue to complain about delays and airline misbehavior – this will require a government bureau to handle complaints and pretend to rectify mistakes made by airlines. And results of accident investigations will continue to show that something went wrong – after all, that is the definition of an accident, isn’t it? Well, the FAA’s job is to pretend to put that something right, whatever it might be.

The FAA and the National Transportation Safety Board (NTSB) are delighted with the status quo – it justifies their current existence. The last thing they want is a transition to a new, more efficient system that would eliminate accidents, errors and mistakes. That would weaken the rationale for big government. It would threaten the rationale for their jobs and their salaries.

Is there such a system on the horizon? Yes, there is.

Free Flight and the Future of Fully Automatic Airline Travel

A 09/06/2014 article in The Economist (“Free Flight”) is subtitled “As more aircraft take to the sky, new technology will allow pilots to pick their own routes but still avoid each other.” The article describes the activities of a Spanish technology company, Indra, involved in training a new breed of air-traffic controllers. The controllers do not shepherd planes to their destinations like leashed animals. Instead, they merely supervise autonomous pilots to make sure that their decisions harmonize with each other. The controllers are analogous to the auctioneer in the general-equilibrium models of pricing developed by the 19th-century economist Léon Walras.

The basic concept of free flight is that the pilot submits a flight plan allowing him or her to fly directly from origin to destination, without having to queue up in a travel corridor behind other planes and travel the comparatively indirect route dictated by the air-traffic control system. This allows closer spacing of planes in the air. Upon arrival, it also allows “continuous descent” rather than the more circuitous approach method that is now standard. This saves both time and fuel. For the European system, the average time saved has been estimated at ten minutes per flight. For the U.S., this would undoubtedly be greater. Translated into fuel, this would be a huge saving. For those concerned about the carbon dioxide emissions of airliners, this would be a boon.

The obvious question is: How are collisions to be avoided under the system of free flight? Technology provides the answer. Flight plans are submitted no less than 25 minutes in advance. Today’s high-speed computing power allows reconciliation of conflicts and any necessary adjustments in flight-paths to be made prior to takeoff. “Pilots” need only stick to their flight plan.
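To make the mechanics concrete, here is a minimal, purely illustrative sketch – not the actual Indra or FAA system – of the kind of pairwise conflict check a free-flight planner might run before takeoff. The 5-nautical-mile separation minimum, the one-minute sampling interval, the flat two-dimensional geometry, the constant-velocity flight plans and every name in the code are simplifying assumptions made up for this example.

import math

MIN_SEPARATION_NM = 5.0   # assumed minimum horizontal separation, nautical miles
SAMPLE_MINUTES = 1.0      # sample the planned trajectories once per minute

def position(plan, t):
    """Planned (x, y) position, in nautical miles, of a constant-velocity flight at minute t."""
    x0, y0, vx, vy, depart, arrive = plan
    if t < depart or t > arrive:
        return None       # aircraft is not airborne at time t
    dt = t - depart
    return (x0 + vx * dt, y0 + vy * dt)

def first_conflict(plan_a, plan_b, horizon_minutes=600.0):
    """Return the first sampled minute at which two plans violate separation, else None."""
    t = 0.0
    while t <= horizon_minutes:
        pa, pb = position(plan_a, t), position(plan_b, t)
        if pa is not None and pb is not None:
            if math.hypot(pa[0] - pb[0], pa[1] - pb[1]) < MIN_SEPARATION_NM:
                return t
        t += SAMPLE_MINUTES
    return None

# Two hypothetical flight plans: (x0, y0, vx, vy, departure_minute, arrival_minute),
# with speeds in nautical miles per minute (7.5 nm/min is roughly 450 knots).
flight_1 = (0.0, 0.0, 7.5, 0.0, 0.0, 60.0)
flight_2 = (450.0, 30.0, -7.5, -1.0, 0.0, 60.0)

clash = first_conflict(flight_1, flight_2)
if clash is None:
    print("No conflict; both plans can be approved as filed.")
else:
    print(f"Separation violated at minute {clash:.0f}; one plan must be adjusted before takeoff.")

A real planner would work in three dimensions plus time and search for the cheapest reroute rather than simply flagging the clash, but the point stands: with flight plans filed even 25 minutes before departure, checks of this kind are trivial for modern computers.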

Streamlining of flight paths is only the beginning of the benefits of free flight. Technology now exists to replace the current system of radar and radio positioning of flights with satellite navigation. This would enable the exact positioning of a flight by controllers at a given moment. The European air-traffic control system is set to transition to satellite navigation by 2017; the U.S. system by 2020.

The upshot of all these advances is that the travel delays that currently have the public up in arms would be gone under the free flight system. It is estimated that the average error in flight arrivals would be no more than one minute.

Why must we wait another five years to reap the gains from a technology so manifestly beneficial? Older readers may recall the series of commercials in which Orson Welles promoted a wine with the slogan “We sell no wine before its time.” The motto of government regulation should be “we save no life before its time.”

The combination of free flight and satellite navigation is incredibly potent. As Jenkins notes, “the networking technology required to make [free flight] work [lends] itself naturally and almost inevitably to computerized aircraft controllable from the ground.” In other words, the human piloting of commercial aircraft has become obsolete – and has been so for years. The only thing standing between us and self-flying airliners has been the open opposition of commercial pilots and their union and the tacit opposition of the regulatory bureaucracy.

Virtually all airline crashes that occur now are the result of human error – or human deliberation. The publication Aviation Safety Network listed 8 crashes since 1994 that are believed to have been deliberately caused by the pilot. The fatalities involved were (in ascending order) 1, 1, 4, 12, 33, 44, 104 and 217. Three cases involved military planes stolen and crashed by unstable pilots, but of the remaining five, four were commercial flights whose pilots or copilots managed to crash their planes and take the passengers with them.

Jenkins resurrects the case of a Japanese pilot who crashed his DC-8 into Tokyo Bay in 1982. He cites the case of the Air Force pilot who crashed his A-10 into a Colorado mountain in 1997. He states what so far nobody else has been willing to say, namely that “last March’s disappearance of Malaysia Airlines 370 appears to have been a criminal act by a member of the crew, though no wreckage has been recovered.”

The possibility of human error and human criminal actions is eliminated when the human element is removed. That is the clincher – if one were needed – in the case for free flight to replace our present antiquated system of air-traffic organization and control.

The case for free flight is analogous to the case for free markets and against the system of central planning and government regulation.

What if… 

Holman Jenkins reveals that as long ago as 1993 (!) no less a personage than Al Gore (!!) unveiled a proposal to partially privatize the air-traffic control system. This would have paved the way for free flight and automation to take over. As Jenkins observes retrospectively, “there likely would have been no 9/11. There would have been no Helios 522, which ran out of fuel and crashed in 2005 when its crew was incapacitated. There would have been no MH 370, no Germanwings 9525.” He is omitting the spillover effects on private aviation, such as the accident that claimed the life of golfer Payne Stewart.

The biggest “what if” of all is the effect on self-driving cars. Jenkins may be the most prominent skeptic about the feasibility – both technical and economic – of autonomous vehicles in the near term. But he is honest enough to acknowledge the truth. “Today we’d have decades of experience with autonomous planes to inform our thinking about autonomous cars. And disasters like the intentional crashing of the Germanwings plane would be hard to conceive of.”

What actually happened was that Gore’s proposal was poured through the legislative and regulatory cheesecloth. What emerged was funding to “study” it within the FAA – a guaranteed ticket to the cemetery. As long as commercial demand for air travel was increasing, pressure on the agency to do something about travel delays and the strain on airport capacity kept the idea alive. But after 9/11, the volume of air travel plummeted for years and the FAA was able to keep the lid on reform by patching up the aging, rickety structure.

And pilots continued to err. On very, very rare occasions, they continued to murder. Passengers continued to die. The air-traveling public continued to fume about delays. As always, they continued to blame the airlines instead of placing blame where it belonged – on the federal government. Now air travel is projected to more than double by 2030. How long will we continue to indulge the fantasy of government regulation as protector and savior?

Free markets solve problems because their participants can only achieve their aims by solving the problems of their customers. Governments perpetuate problems because the aims of politicians, bureaucrats and government employees are served by the existence of problems, not by their solution.

DRI-191 for week of 3-15-15: More Ghastly than Beheadings! More Dangerous than Nuclear Proliferation! It’s…Cheap Foreign Steel!

An Access Advertising EconBrief:

More Ghastly than Beheadings! More Dangerous than Nuclear Proliferation! It’s…Cheap Foreign Steel!

The economic way to view news is as a product called information. Its value is enhanced by adding qualities that make it more desirable. One of these is danger. Humans react to threats and instinctively weigh the threat-potential of any problematic situation. That is why headlines of print newspapers, radio-news updates, TV evening-news broadcasts and Internet websites and blogs all focus disproportionately on dangers.

This obsession with danger does not jibe with the fact that human life expectancy has doubled over the last century and that violence has never been less threatening to mankind than today. Why do we suffer this cognitive dissonance? Our advanced state of knowledge allows us to identify and categorize threats that passed unrecognized for centuries. Today’s degraded journalistic product, more poorly written, edited and produced than formerly, plays on our neuroscientific weaknesses.

Economists are acutely sensitive to this phenomenon. Our profession made its bones by exposing the bogey of “the evil other” – foreign trade, foreign goods, foreign labor and foreign investment as ipso facto evil and threatening. Yet in spite of the best efforts of economists from Adam Smith to Milton Friedman, there is no more dependable pejorative than “foreign” in public discourse. (The word “racist” is a contender for the title, but overuse has triggered a backlash among the public.)

Thus, we shouldn’t be surprised by this headline in The Wall Street Journal: “Ire Rises at China Over Glut of Steel” (03/16/2015, By Biman Mukerji in Hong Kong, John W. Miller in Pittsburgh and Chuin-Wei Yap in Beijing). Surprised, no; outraged, yes.

The Big Scare 

The alleged facts of the article seem deceptively straightforward. “China produces as much steel as the rest of the world combined – more than four times as much as the peak U.S. production in the 1970s.” Well, inasmuch as (a) the purpose of all economic activity is to produce goods for consumption; and (b) steel is a key input in producing countless consumption goods and capital goods, ranging from vehicles to buildings to weapons to cutlery to parts, this would seem to be cause for celebration rather than condemnation. Unfortunately…

“China’s massive steel-making engine, determined to keep humming as growth cools at home, is flooding the world with exports, spurring steel producers around the globe to seek government protection from falling prices. From the European Union to Korea and India, China’s excess metal supply is upending trade patterns and heating up turf battles among local steelmakers. In the U.S., the world’s second-biggest steel consumer, a fresh wave of layoffs is fueling appeals for tariffs. U.S. steel producers such as U.S. Steel Corp. and Nucor Corp. are starting to seek political support for trade action.”

Hmmm. Since this article occupies the place of honor on the front page of the world’s foremost financial publication, we expect it to be authoritative. China has a “massive steel-making engine” – well, that stands to reason, since it’s turning out as much steel as everybody else put together. It is “determined to keep humming.” The article’s three (!) authors characterize the Chinese steelmaking establishment as a machine, which seems apropos. They then endow the metaphoric machine with the human quality of determination – bad writing comes naturally to poor journalists.

This determination is linked with “cooling” growth. Well, the only cooling growth that Journal readers can be expected to infer at this point is the slowing of the Chinese government’s official rate of annual GDP growth from 7.5% to 7%. Leaving aside the fact that the rest of the industrialized world is pining for growth of this magnitude, the authors are not only mixing their metaphors but mixing their markets as well. The only growth directly relevant to the points raised here – exports by the Chinese and imports by the rest of the world – is growth in the steel market specifically. The status of the Chinese steel market is hardly common knowledge to the general public. (Later, the authors eventually get around to the steel market itself.)

So the determined machine is reacting to cooling growth by “flooding the world with exports,” throwing said world into turmoil. The authors don’t treat this as any sort of anomaly, so we’re apparently expected to nod our heads grimly at this unfolding danger. But why? What is credible about this story? And what is dangerous about it?

Those of us who remember the 1980s recall that the monster threatening the world economy then was Japan, the unstoppable industrial machine that was “flooding the world” with exports. (Yes, that’s right – the same Japan whose economy has been lying comatose for twenty years.) The term of art was “export-led growth.” Now these authors are telling us that massive exports are a reaction to weakness rather than a symptom of growth.

“Unstoppable” Japan suddenly stopped in its tracks. No country has ever ascended an economic throne based on its ability to subsidize the consumption of other nations. Nor has the world ever died of economic indigestion caused by too many imports produced by one country. The story told at the beginning of this article lacks any vestige of economic sense or credibility. It is pure journalistic scare-mongering. Nowhere do the authors employ the basic tools of international economic analysis. Instead, they employ the basic tools of scarifying yellow journalism.

The Oxymoron of “Dumping” 

The authors have set up their readers with a menacing specter described in threatening language. A menace must have victims. So the authors identify the victims. Victims must be saved, so the authors bring the savior into their story. Naturally, the savior is government.

The victims are “steel producers around the globe.” They are victimized by “falling prices.” The authors are well aware that they have a credibility problem here, since their readers are bound to wonder why they should view falling steel prices as a threat to them. As consumers, they see falling prices as a good thing. As prices fall, their real incomes rise. Falling prices allow consumers to buy more goods and services with their money incomes. Businesses buy steel. Falling steel prices allow businesses to buy more steel. So why are falling steel prices a threat?

Well, it turns out that falling steel prices are a threat to “chief executives of leading American steel producers,” who will “testify later this month at a Congressional Steel Caucus hearing.” This is “the prelude to launching at least one anti-dumping complaint with the International Trade Commission.” And what is “dumping?” “‘Dumping,’ or selling abroad below the cost of production to gain market share, is illegal under World Trade Organization law and is punishable with tariffs.”

After this operatic buildup, it turns out that the foreign threat to America spearheaded by a gigantic, menacing foreign power is… low prices. Really low prices. Visualize buying steel at Costco or Wal Mart.

Oh, no! Not that. Head for the bomb shelters! Break out the bug-out bags! Get ready to live off the grid!

The inherent implication of dumping is oxymoronic because the end-in-view behind all economic activity is consumption. A seller who sells for an abnormally low price is enhancing the buyer’s capability to consume, not damaging it. If anybody is “damaged” here, it is the seller, not the buyer. And that begs the question, why would a seller do something so foolish?

More often than not, proponents of the dumping thesis don’t take their case beyond the point of claiming damage to domestic import-competing firms. (The three Journal reporters make no attempt whatsoever to prove that the Chinese are selling below cost; they rely entirely on the allegation to pull their story’s freight.) Proponents rely on the economic ignorance of their audience. They paint an emotive picture of an economic world that functions like a giant Olympics. Each country is like a great big economic team, with its firms being the players. We are supposed to root for “our” firms, just as we root for our athletes in the Summer and Winter Olympics. After all, don’t those menacing firms threaten the jobs of “our” firms? Aren’t those jobs “ours?” Won’t that threaten “our” incomes, too?

This sports motif is way off base. U.S. producers and foreign producers have one thing in common – they both produce goods and services that we can consume, either now or in the future. And that gives them equal economic status as far as we are concerned. The ones “on our team” are the ones that produce the best products for our needs – period.

Wait a minute – what if the producers facing those low prices happen to be the ones employing us? Doesn’t that change the picture?

Yes, it does. In that case, we would be better off if our particular employer faced no foreign competition. But that doesn’t make a case for restricting or preventing foreign competition in general. Even people who lose their jobs owing to foreign competition faced by their employer may still gain more income from the lower prices brought by foreign competition in general than they lose by having to take another job at a lower income.

There’s another pertinent reason for not treating foreign firms as antagonistic to consumer interests. Foreign firms can, and do, locate in America and employ Americans to produce their products here. Years ago, Toyota was viewed as an interloper for daring to compete successfully with the “Big 3” U.S. automakers. Now the majority of Toyota automobiles sold in the U.S. are assembled on American soil in Toyota plants located here.

Predatory Pricing in International Markets

Dumping proponents have a last-ditch argument that they haul out when pressed with the behavioral contradictions stressed above. Sure, those foreign prices may be low now, import-competing producers warn darkly, but just wait until those devious foreigners succeed in driving all their competitors out of business. Then watch those prices zoom sky-high! The foreigners will have us in their monopoly clutches.

That loud groan you heard from the sidelines came from veteran economists, who would no sooner believe this than ask a zookeeper where to find the unicorns. The thesis summarized in the preceding paragraph is known as the “predatory pricing” hypothesis. The behavior was notoriously ascribed to John D. Rockefeller by the muckraking journalist Ida Tarbell. It was famously disproved by the research of economist John McGee. And ever since, economists have stopped taking the concept seriously even in the limited market context of a single country.

But when propounded in the global context of international trade, the whole idea becomes truly laughable. Steel is a worldwide industry because its uses are so varied and numerous. A firm that employed this strategy would have to sacrifice trillions of dollars in order to reduce all its global rivals to insolvency. This would take years. These staggering losses would be accounted in current outflows. They would be weighed against putative gains that would begin sometime in the uncertain future – a fact that would make any lender blanch at the prospect of financing the venture.

As if the concept weren’t already absurd, what makes it completely ridiculous is the fact that even if it succeeded, it would still fail. The assets of all those firms wouldn’t vaporize; they could be bought up cheaply and held against the day when prices rose again. Firms like the American steel company Nucor have demonstrated the possibility of compact and efficient production, so competition would be sure to emerge whenever monopoly became a real prospect.

The likelihood of any commercial steel firm undertaking a global predatory-pricing scheme is nil. At this point, opponents of foreign trade are, in poker parlance, reduced to “a chip and a chair” in the debate. So they go all in on their last hand of cards.

How Do We Defend Against Government-Subsidized Foreign Trade?

Jiming Zou, analyst at Moody’s Investors Service, is the designated spokesman of last resort in the article. “Many Chinese steelmakers are government-owned or closely linked to local governments [and] major state-owned steelmakers continue to have their loans rolled over or refinanced.”

Ordinary commercial firms might cavil at the prospect of predatory pricing, but a government can’t go broke. After all, it can always print money. Or, in the case of the Chinese government, it can always “manipulate the currency” – another charge leveled against the Chinese with tiresome frequency. “The weakening renminbi was also a factor in encouraging exports,” contributed another Chinese analyst quoted by the Journal.

One would think that a government with the awesome powers attributed to China’s wouldn’t have to retrench in all the ways mentioned in the article – reduce spending, lower interest rates, and cut subsidies to state-owned firms including steel producers. Zou is doubtless correct that “given their important role as employers and providers of tax revenue, the mills are unlikely to close or cut production even if running losses,” but that cuts both ways. How can mills “provide tax revenue” if they’re running huge losses indefinitely?

There is no actual evidence that the Chinese government is behaving in the manner alleged; the evidence is all the other way. Indeed, the only actual recipients of long-term government subsidies to firms operating internationally are creatures of government like Airbus and Boeing – firms that produce most or all of their output for purchase by government and are quasi-public in nature, anyway. But that doesn’t silence the protectionist chorus. Government-subsidized foreign competition is their hole card and they’re playing it for all it’s worth.

The ultimate answer to the question “how do we defend against government-subsidized foreign trade?” is: We don’t. There’s no need to. If a foreign government is dead set on subsidizing American consumption, the only thing to do is let them.

If the Chinese government is enabling below-cost production and sale by its firms, it must be doing it with money. There are only three ways it can get money: taxation, borrowing or money creation. Taxation bleeds Chinese consumers directly; money creation does it indirectly via inflation. Borrowing does it, too, when the bill comes due at repayment time. So foreign exports to America subsidized by the foreign government benefit American consumers at the expense of foreign consumers. No government in the world can subsidize the world’s largest consumer nation for long. But the only thing more foolish than doing it is wasting money trying to prevent it.

What Does “Trade Protection” Accomplish?

Textbooks in international economics spell out in meticulous detail – using either carefully drawn diagrams or differential and integral calculus – the adverse effects of tariffs and quotas on consumers. Generally speaking, tariffs have the same effects on consumers as taxes in general – they drive a wedge between the price paid by the consumer and received by the seller, provide revenue to the government and create a “deadweight loss” of value that accrues to nobody. Quotas are, if anything, even more deleterious. (The relative harm depends on circumstances too complex to enumerate.)
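For readers who want the textbook result without the diagrams, here is a compact sketch of the standard small-country, partial-equilibrium case, assuming linear demand and supply. A tariff of \(\tau\) per unit raises the domestic price from the world price \(P_w\) to \(P_w + \tau\). Consumer surplus falls; part of the loss is transferred to domestic producers, part becomes government revenue on the remaining imports, and the rest vanishes as deadweight loss:

\[
DWL \;\approx\; \tfrac{1}{2}\,\tau\,\Delta Q_s \;+\; \tfrac{1}{2}\,\tau\,\Delta Q_d,
\]

where \(\Delta Q_s\) is the tariff-induced expansion of higher-cost domestic production and \(\Delta Q_d\) is the tariff-induced reduction in consumption. Those two “triangles” accrue to nobody; they are value destroyed outright.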

This leads to a painfully obvious question: If tariffs hurt consumers in the import-competing country, why in the world do we penalize alleged misbehavior by exporters by imposing tariffs? This is analogous to imposing a fine on a convicted burglar along with a permanent tax on the victimized homeowner.

Viewed in this light, trade protection seems downright crazy. And in purely economic terms, it is. But in terms of political economy, we have left a crucial factor out of our reckoning. What about the import-competing producers? In the Wall Street Journal article, these are the complainants at the bar of the International Trade Commission. They are also the people economists have been observing ever since the days of Adam Smith in the late 18th century, bellied up at the government-subsidy bar.

In Smith’s day, the economic philosophy of Mercantilism reigned supreme. Specie – that is, gold and silver – was considered the repository of real wealth. By sending more goods abroad via export than returned in the form of imports, a nation could produce a net inflow of specie payments – or so the conventional thinking ran. This philosophy made it natural to favor local producers and inconvenience foreigners.

Today, the raison d’etre of the modern state is to take money from people in general and give it to particular blocs to create voting constituencies. This creates a ready-made case for trade protection. So what if it reduces the real wealth of the country – the goods and services available for consumption? It increases the electoral prospects of the politicians responsible and appears to increase the real wealth of the beneficiary blocs, which is sufficient for legislative purposes.

This is corruption, pure and simple. The authors of the Journal article present this corrupt process with a straight face because their aim is to present cheap Chinese steel as a danger to the American people. Thus, their aims dovetail perfectly with the corrupt aims of government.

And this explains the front-page article in the 03/16/2015 Wall Street Journal. It reflects the news value of posing a danger where none exists – that is, the corruption of journalism – combined with the corruption of the political process.

The “Effective Rate of Protection”

No doubt the more temperate readers will object to the harshness of this language. Surely “corruption” is too harsh a word to apply to the actions of legislators. They have a great big government to run. They must try to be fair to everybody. If everybody is not happy with their efforts, that is only to be expected, isn’t it? That doesn’t mean that legislators aren’t trying to be fair, does it?

Consider the economic concept known as the effective rate of protection. It is unknown to the general public, but it appears in every textbook on international economics. It arises from the conjunction of two facts: first, that a majority of goods and services are composed of raw materials, intermediate goods and final-stage (consumer) goods; and second, that governments have an irresistible impulse to levy taxes on goods that travel across international borders.

To keep things starkly simple and promote basic understanding, take the simplest kind of numerical example. Assume the existence of a fictional textile company. It takes a raw material, cotton, and spins, weaves and processes that cotton into a cloth that it sells commercially to its final consumers. This consumer cloth competes with the product of domestic producers as well as with cotton cloth produced by foreign textile producers. We assume that the prevailing world price of each unit of cloth is $1.00. We assume further that domestic producers obtain one textile unit’s worth of cotton for $.50 and add a further $.50 worth of value by spinning, weaving and processing it into cloth.

We have a basic commodity being produced globally by multiple firms, indicating the presence of competitive conditions. But legislators, perhaps possessing some exalted concept of fairness denied to the rabble, decide to impose a tariff on the importation of cotton cloth. Not wishing to appear excessive or injudicious, the solons set this ad valorem tariff at 15%. Given the competitive nature of the industry, this will soon elevate the domestic price of textiles above the world price by the amount of the tariff; i.e., by $.15, to $1.15. Meanwhile, there is no tariff levied on cotton, the raw material. (Perhaps cotton is grown domestically and not imported into the country or, alternatively, perhaps cotton growers lack the political clout enjoyed by textile producers.)

The insight gained from the effective rate of protection begins with the realization that the net income of producers in general derives from the value they add to any raw materials and/or intermediate products they utilize in the production process. Initially, textile producers added $.50 worth of value for every unit of cotton cloth they produced. Imposition of the tariff allows the domestic textile price to rise from $1.00 to $1.15, which causes textile producers’ value added to rise from $.50 to $.65.

Legislators judiciously and benevolently decided that the proper amount of “protection” to give domestic textile producers from foreign competition was 15%. They announced this finding amid fanfare and solemnity. But it is wrong. The tariff has the explicit purpose of “protecting” the domestic industry, of giving it leeway it would not otherwise get under the supposedly harsh and unrelenting regime of global competition. But this tariff does not give domestic producers 15% worth of protection. $.15 divided by $.50 – that is, the increase in value added divided by the original value added – is .30, or 30%. The effective rate of protection is double the size of the “nominal” (statutory) level of protection. In general, think of the statutory tariff rate as the surface appearance and the effective rate as the underlying truth.
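In compact form, the arithmetic of the example is:

\[
\text{value added before the tariff} = \$1.00 - \$0.50 = \$0.50, \qquad
\text{value added after the tariff} = \$1.15 - \$0.50 = \$0.65,
\]
\[
\text{effective rate of protection} = \frac{0.65 - 0.50}{0.50} = 0.30 = 30\%,
\]

exactly double the 15% nominal rate.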

Like oh-so-many economic principles, the effective rate of protection is a relatively simple concept that can be illustrated with simple examples, but that rapidly becomes complex in reality. Two complications need mention. When tariffs are also levied on raw materials and/or intermediate products, this affects the relationship between the effective and nominal rate of protection. The rule of thumb is that higher tariff rates on raw materials and intermediate goods relative to tariffs on final goods tend to lower effective rates of protection on the final goods – and vice-versa.

The other complication is the percentage of total value added comprised by the raw materials and intermediate goods prior to, and subsequent to, imposition of the tariff. This is a particularly knotty problem because tariffs affect prices faced by buyers, which in turn affect purchases, which in turn can change that percentage. When tariffs on final products exceed those on raw materials and intermediate goods – and this has usually been the case in American history – an increase in this percentage will increase the effective rate.
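A standard textbook formula captures both complications at once. Let \(t\) be the nominal tariff on the final good, \(t_m\) the tariff on the imported raw material or intermediate good, and \(a\) that input’s share of the final good’s price under free trade. Then:

\[
\text{ERP} \;=\; \frac{t - a\,t_m}{1 - a}.
\]

With the numbers from the textile example (\(t = 0.15\), \(t_m = 0\), \(a = 0.5\)), the formula gives \(0.15 / 0.5 = 30\%\). Raising the input tariff \(t_m\) toward \(t\) pulls the effective rate back down toward the nominal rate, while (when \(t\) exceeds \(t_m\)) a larger input share \(a\) raises it – which is exactly the pair of rules of thumb just stated.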

But for our immediate purposes, it is sufficient to realize that appearance does not equal reality where tariff rates are concerned. And this is the smoking gun in our indictment of the motives of legislators who promote tariffs and restrictive foreign-trade legislation.

 

Corrupt Legislators and Self-Interested Reporting are the Real Danger to America

In the U.S., the Harmonized Tariff Schedule includes thousands of tariffs of widely varying sizes. These not only allow legislators to pose as saviors of numerous business constituent classes. They also allow them to lie about the degree of protection being provided, the real locus of the benefits and the reasons behind them.

Legislators claim that the size of tariff protection being provided is modest, both in absolute and relative terms. This is a lie. Effective rates of protection are higher than they appear for the reasons explained above. They unceasingly claim that foreign competitors behave “unfairly.” This is also a lie, because there is no objective standard by which to judge fairness in this context – there is only the economic standard of efficiency. Legislators deliberately create bogus standards of fairness to give themselves the excuse to provide benefits to constituent blocs – benefits that take money from the rest of us. International trade bodies are created to further the ends of domestic governments in this ongoing deception.

Readers should ask themselves how many times they have read the term “effective rate of protection” in The Wall Street Journal, The Financial Times of London, Barron’s, Forbes or any of the major financial publications. That is an index of the honesty and reputability of financial journalism today. The term was nowhere to be found in the Journal piece of 03/16/2015.

Instead, the three Journal authors busied themselves flacking for a few American steel companies. They showed bar graphs of increasing Chinese steel production and steel exports. They criticized the Chinese because the country’s steel production has “yet to slow in lockstep” with growth in demand for steel. They quoted self-styled experts on China’s supposed “problem [with] hold[ing] down exports” – without ever explaining what rule, standard or economic principle would require a nation to withhold exports from willing buyers. They cited year-over-year increases in exports in January of 2013, 2014 and 2015 as evidence of China’s guilt, along with the fact that the Chinese were on pace to export more steel than any other country “in this century.”

The reporters quoted the whining of a U.S. Steel vice-president that demonstrating damage from Chinese exports is just “too difficult” to satisfy trade commissioners. Not content with this, they threw in complaints by an Indian steel executive and South Koreans as well. They neglected to tell their readers that Chinese, Indian and South Korean steels tend to be of lower grades – a datum that helps to explain their lower prices. U.S. and Japanese steels tend to be higher grade, which helps to explain why companies like Nucor have been able to keep prices and profit margins high for years. The authors cite one layoff at U.S. Steel but forget to cite the recent article in their own Wall Street Journal lauding the history of Nucor, which has never laid off an employee despite the pressure of Chinese competition.

That same article quoted complaints by steel buyers in this country about the “competitive disadvantage” imposed by the higher-priced U.S. steel. Why are the complaints about cheap Chinese exports front-page news while the complaints about high-priced American steel are buried in back pages – and not even mentioned in a subsequent banner article boasting input by no fewer than three Journal reporters? Why did the reporters forget to cite the benefits accruing to American steel users from low prices for steel imports? Don’t these reporters read their own newspaper? Or do they report only what comports with their own agenda?

DRI-183 for week of 3-1-15: George Orwell, Call Your Office – The FCC Curtails Internet Freedom In Order to Save It

An Access Advertising EconBrief:

George Orwell, Call Your Office – The FCC Curtails Internet Freedom In Order to Save It

February 26, 2015 is a date that will live in regulatory infamy. That assertion is subject to revision by the courts, as is nearly everything undertaken these days by the Obama administration. As this is written, the Supreme Court hears yet another challenge to “ObamaCare,” the Affordable Care Act. President Obama’s initiative to achieve a single-payer system of national health care in the U.S. is rife with Orwellian irony, since it cannot help but make health care unaffordable for everybody by further removing the consumer of health care from every exposure to the price of health care. Similarly, the latest administration initiative is the February 26 approval by the Federal Communications Commission (FCC) of the so-called “Net Neutrality” doctrine in regulatory form. Commission Chairman Tom Wheeler’s summary of his regulatory proposal – consisting of 332 pages that were withheld from the public – has been widely characterized as a proposal to “regulate the Internet like a public utility.”

This episode is shot through with a totalitarian irony that only George Orwell could fully savor. The FCC is ostensibly an independent regulatory body, free of political control. In fact, Chairman Wheeler long resisted the “net neutrality” doctrine (hereinafter, shortened to “NN” for convenience). The FCC’s decision was a response to pressure from President Obama, which made a mockery of the agency’s independence. The alleged necessity for NN arises from the “local monopoly” over “high-speed” broadband exerted by Internet service providers (again, hereinafter abbreviated as “ISPs”) – but a “public utility” was, and is, by definition a regulated monopoly. Since the alleged local monopoly held by ISPs is itself fictitious, the FCC is in fact proposing to replace competition with monopoly.

To be sure, the particulars of Chairman Wheeler’s proposal are still open to conjecture. And the enterprise is wildly illogical on its face. The idea of “regulating the Internet like a public utility” treats those two things as equivalent entities. A public utility is a business firm. But the Internet is not a single business firm; indeed, it is not a single entity at all in the concrete sense. In the business sense, “the Internet” is shorthand for an infinite number of existing and potential business firms serving the world’s consumers in countless ways. The clause “regulate the Internet like a public utility” is quite literally meaningless – laughably indefinite, overweening in its hubris, frightening in its totalitarian implications.

It falls to an economist, former FCC Chief Economist Thomas Hazlett of Clemson University, to sculpt this philosophy into its practical form. He defines NN as “a set of rules… regulating the business model of your local ISP.” In short, it is a political proposal that uses economic language to prettify and conceal its real intentions. NN websites are emblazoned with rhetoric about “protecting the Open Internet” – but the Internet has thrived on openness for over 20 years under the benign neglect of government regulators. This proposal would end that era.

There is no way on God’s green earth to equate a regulated Internet with an open Internet; the very word “regulated” is the antithesis of “open.” NN proponents paint scary scenarios about ISPs “blocking or interfering with traffic on the Internet,” but their language is always conditional and hypothetical. They are posing scenarios that might happen in the future, not ones that threaten us today. Why? Because competition and innovation have protected consumers up to now and continue to do so. NN will make its proponents’ scary predictions more likely, not less, because it will restrict competition. That is what regulation does in general; that is what public-utility regulation specifically does. For over a century, public-utility regulation has installed a single firm as a regulated monopoly in a particular market and has forcefully suppressed all attempts to compete with that firm.

Of course, that is not what President Obama, Chairman Wheeler and NN proponents want us to envision when we hear the words “regulate the Internet like a public utility.” They want us to envision a lovely, healthy flock of sheep grazing peacefully in a beautiful meadow, supervised by a benevolent, powerful Shepherd with a herd of well-trained, affectionate shepherd dogs at his command. Soothing music is piped down from heaven and love and tranquility reign. At the far edges of the meadow, there is a forest. Hungry wolves dwell within, eyeing the sheep covetously. But they dare not approach, for they fear the power of the Shepherd and his dogs.

In other words, the Obama administration is trying to manipulate the emotions of the electorate by creating an imaginary vision of public-utility regulation. The reality of public-utility regulation was, and is, entirely different.

The Natural-Monopoly Theory of Public-Utility Regulation

The history of public-utility regulation is almost, but not quite, co-synchronous with that of government regulation of business in the United States. Regulation began at the state level with Munn v. Illinois, which paved the way for state regulation of the grain business in the 1870s. The Interstate Commerce Commission’s inaugural voyage with railroad regulation followed in the late 1880s. With the commercial introduction of electric lighting and the telephone came business firms tailored to those ends. And in their wake came the theory of natural monopoly.

Both electric power and telephones came to be known as “natural monopoly” industries; that is, industries in which both economic efficiency and commercial viability dictated that a single firm serve the entire market. This was the outgrowth of economies of scale in production, owing to decreasing long-run average cost of production. This decidedly unusual state of affairs is a technological anomaly. Engineers recognize it in conjunction with the “two-thirds rule.” There are certain cases in which total cost increases as the two-thirds power of output, which implies that average cost decreases steadily as output rises. (The thru-put of pipes and cables and the capacity of cargo holds are examples.) In turn, this implies that the firm that grows the fastest will undersell all others while still covering all its costs. The further implication is that consumers will receive the most output at the lowest price if one monopoly firm serves everybody – if, and only if, the firm’s price can be constrained equal to its long-run average cost at the rate of output necessary to meet market demand. An unconstrained monopoly would produce less than this optimal rate of output and charge a higher price, in order to maximize its profit. But the theoretical outcome under regulated monopoly equates price with long-run average cost, which provides the utility with a rate of return equal to what it could get in the best alternative use for its financial capital, given its business risk.
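A minimal numerical sketch makes the cost arithmetic concrete; the cost function and scale factor below are assumed solely for illustration.

# Illustrative sketch (assumed numbers): if total cost rises as the two-thirds
# power of output, average (per-unit) cost falls steadily as output grows --
# the technological signature of the "natural monopoly" described above.

def total_cost(q, k=100.0):
    # assumed cost function: C(q) = k * q^(2/3)
    return k * q ** (2.0 / 3.0)

for q in (1_000, 10_000, 100_000):
    avg = total_cost(q) / q
    print(f"output {q:>7,}: average cost per unit = {avg:.2f}")
# Average cost falls by a factor of roughly 2.15 for each tenfold increase in
# output, so the biggest producer can undersell smaller rivals while still
# covering its costs.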

In the U.S. and Canada, this regulated outcome is sought via periodic rate hearings staged by a public-utility regulatory commission (PUC for short). The utility is privately owned by shareholders. In Europe, utilities are not privately owned. Instead, their prices are (in principle) set equal to long-run marginal cost, which is below the level of average cost and thus constitutes a loss in accounting terms. Taxpayers subsidize this loss – these subsidies are the alternative to the profits earned by regulated public-utility firms in the U.S. and Canada.

These regulatory schemes represent the epitome of what the Nobel laureate Ronald Coase called “blackboard economics” – economists micro-managing reality as if they possessed all the information and control over reality that they do when drawing diagrams on a classroom blackboard. In practice, things did not work out as neatly as the foregoing summary would lead us to believe. Not even remotely close, in fact.

The Myriad Slips Twixt Theoretical Cup and Regulatory Lip

What went wrong with this theoretical set-up, seemingly so pat when viewed in a textbook or on a classroom blackboard? Just about everything, to some degree or other. Today, we assume that the institution of regulated monopoly came in response to market monopolies achieved and abuses perpetrated by electric and telephone companies. What mostly happened, though, was different. There were multiple providers of electricity and telephone service in the early days. In exchange for submitting to rate-of-return regulation, though, one firm was extended a grant of monopoly and other firms were excluded. Only in very rare cases did competition exist for local electric service – and curiously, this rate competition actually produced lower electric rates than did public-utility regulation.

This result was not the anomaly it seemed, since the supposed economies of scale were present only in the distribution of electric power, not in power generation. So the cost superiority of a single firm producing for the whole market turned out to be not the slam-dunk that was advertised. That was just one of many cracks in the façade of public-utility regulation. Over the course of the 20th century, the evolution of public-utility regulation in telecommunications proved to be paradigmatic for the failures and inherent shortcomings of the form.

Throughout the country, the Bell System was handed a monopoly on the provision of local service. Its local service companies – the analogues to today’s ISPs – gradually acquired reputations as the heaviest political hitters in state-government politics. The high rates paid by consumers bought lobbyists and legislators by the gross, and they obediently safeguarded the monopoly franchise and kept the public-utility commissions (PUCs) staffed with tame members. That money also paid the bill for a steady diet of publicity designed to mislead the public about the essence of public-utility regulation.

We were assured by the press that the PUC was a vigilant watchdog whose noble motives kept the greedy utility executives from turning the rate screws on a helpless public. At each rate hearing, self-styled consumer advocacy groups paraded their compassion for consumers by demanding low rates for the poor and high rates on business – as if it were really possible for some non-human entity called “business” to pay rates in the true sense, any more than it could pay taxes. PUCs made a show of ostentatiously requiring the utility to enumerate its costs and pretending to laboriously calculate “just and reasonable” rates – as if a Commission possessed juridical powers denied to the world’s greatest philosophers and moralists.

Behind the scenes, after the press had filed their poker-faced stories on the latest hearings, increasingly jaded and cynical reporters, editors and industry consultants rolled their eyes and snorted at the absurdity of it all. Utilities quickly learned that they wouldn’t be allowed to earn big “profits,” because this would be cosmetically bad for the PUC, the consumer advocates, the politicians and just about everybody involved in this process. So executives, middle-level managers and employees figured out that they had to make their money differently than they would if working for an ordinary business in the private sector. Instead of working efficiently and productively and striving to maximize profit, they would strive to maximize cost instead. Why? Because they could make money from higher costs in the form of higher salaries, higher wages, larger staffs and bigger budgets. What about the shareholders, who would ordinarily be shafted by this sort of behavior? Shareholders couldn’t lose because the PUC was committed to giving them a rate of return sufficient to attract financial capital to the industry. (And the shareholders couldn’t gain from extra diligence and work effort put forward by the company because of the limitation on profits.) That is, the Commission would simply ratchet up rates commensurate with any increase in costs – accompanied by whatever throat-clearing, phony displays of concern for the poor and cost-shifting shell games were necessary to make the numbers work. In the final analysis, the name of the game was inefficiency and consumers always paid for it – because there was nobody else who could pay.
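The arithmetic of that incentive can be sketched in a few lines; the allowed rate of return and the dollar figures below are assumed for illustration, not drawn from any actual rate case.

# Stylized sketch of the incentive described above (all numbers assumed).
# Under rate-of-return regulation, allowed revenue is roughly
#     revenue = operating_cost + allowed_return * rate_base
# so any increase in cost is passed through to ratepayers while the
# shareholders' return stays pinned at the allowed rate.

def allowed_revenue(operating_cost, rate_base, allowed_return=0.08):
    return operating_cost + allowed_return * rate_base

lean   = allowed_revenue(operating_cost=50_000_000, rate_base=200_000_000)
padded = allowed_revenue(operating_cost=65_000_000, rate_base=230_000_000)

print(f"Lean utility:   rates must recover ${lean:,.0f}")     # $66,000,000
print(f"Padded utility: rates must recover ${padded:,.0f}")   # $83,400,000
# Shareholders earn the allowed 8% on the rate base either way; the extra
# salaries, staff and gold-plated capital are simply billed to consumers.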

So much for the vaunted institution of public-utility regulation in the public interest. Over fifty years ago, a famous left-wing economist named Gardiner Means proposed subjecting every corporation in the U.S. to rate-of-return regulation by the federal government. This held the record for most preposterous policy program advanced by a mainstream commentator – until Thomas Wheeler announced that henceforth the Internet would be regulated as if it were a public utility. Now every American will get a taste of life as Ivan Denisovich, consigned to the Gulag Archipelago of regulatory bureaucracy.

Of particular significance to us in today’s climate is the effect of this regime on innovation. Outside of totalitarian economies such as the Soviet Union and Communist China, public-utility regulation is the most stultifying climate for innovation ever devised by man. The idea behind innovation is to find ways to produce more goods using the same amount of inputs or (equivalently) the same amount of goods using fewer inputs. Doing this lowers costs – which increases profits. But why go to the trouble if you can’t enjoy the increase in profits? Of course, utilities were willing to spend money on research, provided they could get it in the rate base and earn a rate of return on the investment. But they had no incentive to actually implement any cost-saving innovations. The Bell System was legendary for its unwillingness to lower its costs; the economic literature is replete with jaw-dropping examples of local Bell companies lagging years and even decades behind the private sector in technology adoption – even spurning advances developed in Bell’s own research labs!

Any reader who suspects this writer of exaggeration is invited to peruse the literature of industrial organization and regulation. One nagging question should be dealt with forthwith. If the demerits of public-utility regulation were well recognized by insiders, how were they so well concealed from the public? The answer is not mysterious. All of those insiders had a vested interest in not blowing the whistle on the process because they were making money from ongoing public-utility regulation. Commission employees, consultants, expert witnesses, public-interest lawyers and consumer advocates all testified at rate hearings or helped prepare testimony or research it. They either worked full-time or traveled the country as contractors earning lucrative hourly pay. If any one of them was crazy enough to launch an expose of the public-utility scam, he or she would be blackballed from the business while accomplishing nothing – the institutional inertia in favor of the system was so enormous that it would have taken mass revolt to effect change. So they just shrugged, took the money and grew more cynical by the year.

In retrospect, it seems miraculous that anything did change. In the 1960s, local Bell companies were undercharging for local service to consumers and compensating by soaking business and long-distance customers with high prices. The high long-distance rates eventually attracted the interest of would-be competitors. One government regulator grew so fed up with the inefficiency of the Bell system that he granted the competitive petition of a small company called MCI, which sought to compete only in the area of long-distance telecommunications. MCI was soon joined by other firms. The door to competition had been cracked slightly ajar.

In the 1980s, it was kicked wide open. A federal antitrust lawsuit against AT&T led to the breakup of the firm. At the time, the public was dubious about the idea that competition was possible in telecommunications. The 1990s soon showed that regulators were the only ones standing between the American public and a revolution unlike anything we had seen in a century. After vainly trying to protect the local Bells against competition, regulators finally succumbed to the inevitable – or rather, they were overrun by the competitive hordes. When the public got used to cell phones and the Internet, they ditched good old Ma Bell and land-line phones.

This, then, is public-utility regulation. The only reason we have smart phones and mobile Internet access today is that public-utility regulation in telecommunications was overrun by competition despite regulatory opposition in the 1990s. But public-utility regulation is the wonderful fate to which Barack Obama, Thomas Wheeler and the FCC propose to consign the Internet. What is the justification for their verdict?

The Case for Net Neutrality – Debunked

As we have seen, public-utility regulation was based on a premise that certain industries were “natural monopolies.” But nobody has suggested that the Internet is a natural monopoly – which makes sense, since it isn’t an industry. Nobody has suggested that all or even some of the industries that utilize the Internet are natural monopolies – which makes sense, since they aren’t. So why in God’s name should we subject them to public-utility regulation – especially since public-utility regulation didn’t even work well in the industries for which it was ideally suited? We shouldn’t.

The phrase “net neutrality” is designed to achieve an emotional effect through alliteration and a carefully calculated play on the word “neutral.” In this case, the word is intended to appeal to egalitarian sympathies among hearers. It’s only fair, we are urged to think, that ISPs, the “gatekeepers” of the Internet, are scrupulously fair or “neutral” in letting everybody in on the same terms. And, as with so many other issues in economics, the case for “fairness” becomes just so much sludge upon closer examination.

The use of the term “gatekeepers” suggests that God handed to Moses on Mount Sinai a stone tablet for the operation of the Internet, on which ISPs were assigned the role of “gatekeepers.” Even as hyperbolic metaphor, this bears no relation to reality. Today, cable companies are ISPs. But they began life as monopoly-killers. In the early 1960s, Americans chose among three monopoly VHF-TV networks, broadcast by ABC, NBC and CBS. Gradually, local UHF stations started to season the diet of content-starved viewers. When cable-TV came along, it was like manna from heaven to a public fed up with commercials and ravenous for sports and movies. But government regulators didn’t allow cable-TV to compete with VHF and UHF in the top 100 media markets of the U.S. for over two decades. As usual, regulators were zealously protecting government monopoly, restricting competition and harming consumers.

Eventually, cable companies succeeded in tunneling their way into most local markets. They did it by bribing local government literally and figuratively – the latter by splitting their profits via investment in pet political projects of local politicians as part of their contracts. In return, they were guaranteed various degrees of exclusivity. But this “monopoly” didn’t last because they eventually faced competition from telecommunication firms who wanted to get into their business and whose business the cable companies wanted to invade. And today, the old structural definitions of monopoly simply don’t apply to the interindustry forms of competition that prevail.

Take the Kansas City market. Originally, Time Warner had a monopoly franchise. But eventually a new cable company called Everest invaded the metro area across the state line in Johnson County, KS. Overland Park is contiguous with Kansas City, MO, and consumers were anxious to escape the toils of Time Warner. Eventually, Everest prevailed upon KC, MO to gain entry to the Missouri side. Now even the cable-TV market was competitive. Then Google selected Kansas City, KS as the venue for its new high-speed service. Soon KC, MO was included in that package, too – now there were three local ISPs! (Everest has morphed into two successive incarnations, one of which still serves the area.)

Although this is not typical, it does not exhaust the competitive alternatives. This is only the picture for fixed service. Americans are now turning to mobile forms of access to the Internet, such as smart phones. Smart watches are on the horizon. For mobile access, the ISP is a wireless company like AT&T, Verizon, Sprint or T-Mobile.

The NN websites stridently maintain that “most Americans have only a single ISP.” This is nonsense; a charitable interpretation would be that most of us have only a single cable-TV provider in our local market. But there is no necessary one-to-one correlation between “cable-TV provider” and “ISP.” Besides, the state of affairs today is ephemeral – different from what it was a few years ago and from what it will be a few years from now. It is only under public-utility regulation that technology gets stuck in one place because under public-utility regulation there is no incentive to innovate.

More specifically, the FCC’s own data suggest that 80% of Americans have two or more ISPs offering 10Mbps downstream speeds. 96% have two or more ISPs offering 6Mbps downstream and 1.5Mbps upstream speeds. (Until quite recently, the FCC’s own criterion for “high-speed” Internet was 4Mbps or more.) This simply does not comport with any reasonable structural concept of monopoly.

The current flap over “blocking and interfering with traffic on the Internet” is the residue of disputes between Netflix and ISPs over charges for transmission of the former’s streaming services. In general, there is movement toward higher charges for data transmission than for voice transmission. But the huge volumes of traffic generated by Netflix cause congestion, and the free-market method for handling congestion is a higher price, or the functional equivalent. That is what economists have recommended for dealing with road congestion during rush hours and congested demand for air-conditioning and heating services at peak times of day and during peak seasons. Redirecting demand to the off-peak is not a monopoly response; it is an efficient market response. Competitive bar and restaurant owners do it with their pricing methods; competitive movie theater owners also do it (or used to).
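A stylized sketch of peak-load pricing illustrates the point; the demand curves and the capacity figure below are assumed purely for illustration.

# Minimal sketch of peak-load pricing, the "efficient market response"
# described above. Demand curves and capacity are assumed numbers.

def quantity_demanded(price, intercept, slope):
    # simple linear demand: Q = intercept - slope * price
    return max(intercept - slope * price, 0.0)

CAPACITY = 100.0                         # network capacity per period (assumed)

peak     = dict(intercept=220.0, slope=2.0)   # heavy evening streaming demand
off_peak = dict(intercept=100.0, slope=2.0)   # lighter overnight demand

flat_price = 10.0
print("Peak demand at a flat price of 10:    ", quantity_demanded(flat_price, **peak))      # 200 -> congestion
print("Off-peak demand at a flat price of 10:", quantity_demanded(flat_price, **off_peak))  # 80  -> spare capacity

# The congestion-clearing peak price equates quantity demanded with capacity:
peak_price = (peak["intercept"] - CAPACITY) / peak["slope"]
print("Peak price that just fills capacity:  ", peak_price)  # 60.0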

Similar logic applies to other forms of hypothetically objectionable behavior by ISPs. The prioritization of traffic, creation of “fast” and “slow” lanes, blocking of content – these and other behaviors are neither inherently good nor bad. They are subject to the constraints of competition. If they are beneficial on net balance, they will be vindicated by the market. That is why we have markets. If a government had to vet every action by every business for moral worthiness in advance, it would paralyze life as we know it. The only sensible course is to allow free markets and competition to police the activities of competitors.

Just as there is nothing wrong or untoward with price differentials based on usage, there is nothing virtuous about government-enforced pricing equality. Forcing unequals to be treated equally is not meritorious. NN proponents insist that the public has to be “protected” from that kind of treatment. But this is exactly what PUCs did for decades when they subsidized residential consumers inefficiently by soaking business and long-distance users with higher rates. Back then, the regulatory mantra wasn’t “net neutrality,” it was “universal service.” Ironically, regulators never succeeded in achieving rates of household telephone subscription that exceeded the rate of household television service. Consumers actually needed – but didn’t get – protection from the public-utility monopoly imposed upon them. Today, consumers don’t need protection because there is no monopoly, nor is there any prospect of one absent regulatory intervention. The only remaining vestige of monopoly is that remaining from the grants of local cable-TV monopoly given by municipal governments. Compensating for past mistakes by local government is no excuse for making a bigger mistake by granting monopoly power to FCC regulators.

Forbearance? 

The late, great economist Frank Knight once remarked that he had heard do-gooders utter the equivalent words to “I want power to do good” so many times for so long that he automatically filtered out the last three words, leaving only “I want power.” Federal-government regulators want the maximum amount of power with the minimum number of restrictions, leaving them the maximum amount of flexibility in the exercise of their power. To get that, they have learned to write excuses into their mandates. In the case of NN and Internet regulation, the operative excuse is “forbearance.”

Forbearance is the wave of the hand with which they will brush aside all the objections raised in this essay. The word appears in the original Title II regulations. It means that regulators aren’t required to enforce the regulations if they don’t want to; they can “forbear.” “Hey, don’t worry – be happy. We won’t do the bad stuff, just the good stuff – you know, the ‘neutrality’ stuff, the ‘equality’ stuff.” Chairman Wheeler is encouraging NN proponents to fill the empty vessel of Internet regulation with their own individual wish-fulfillment fantasies of what they dream a “public utility” should be, not what the ugly historical reality tells us public-utility regulation actually was. For example, he has implied that forbearance will cut out things like rate-of-return regulation.

This just begs the questions raised by the issue of “regulating the Internet like a public utility.” The very elements that Wheeler proposes to forbear constitute part and parcel of public-utility regulation as we have known it. If these are forborne, we have no basis for knowing what to expect from the concept of Internet public-utility regulation at all. If they are not, after all, forborne – then we are back to square one, with the utterly dismal prospect of replaying 20th-century public-utility regulation in all its cynical inefficiency.

Forbearance is a good idea, all right – so good that we should apply it to the whole concept of Internet regulation by the federal government. We should forbear completely.

DRI-162 for week of 2-1-15: It Happens Every Season

An Access Advertising EconBrief:

It Happens Every Season

The Super Bowl has come and gone. And with it have come stories on the economic benefits accruing to the host city – or, in this case, cities. The refrain is always the same. The opportunity to host the Super Bowl is the municipal equivalent of winning the Powerball lottery. Thousands – no, hundreds of thousands of people – descend on the host city. They focus the world’s attention upon it. They “put it on the map.” They spend money, and that money rockets and ricochets and rebounds throughout the local economy with ballistic force, conferring benefits left, right and center. We cannot help but wonder – why don’t we replicate this benefit process by bringing people and businesses to town? Why wait in vain on a Super Bowl lottery when we can instead run our own economic benefit lottery by offering businesses incentives to relocate, thereby redistributing economic benefits in our favor?

It happens every winter. In fact, publicity about economic development incentives (EDIs) is always in season, for they operate year-round. Nowadays almost every state in the union has a government bureau with “economic development” on its nameplate and a toolkit bulging with subsidies and credits.

For years, the news media have mindlessly repeated this stylized picture of EDIs, as if they were all reading from the same talking points. Both the logic of economics and empirical reality diverge starkly from this portrait.

EDIs In a Nutshell

The term “EDIs” is shorthand for a variety of devices intended to make it more attractive for particular businesses to relocate to and/or operate in a particular geographic area. The devices involve either taxes or subsidies. Sometimes a business will receive an outright grant of money to relocate, much as an individual gets a relocation bonus from his or her company. Sometimes a business will receive a tax credit as an inducement to relocate. The tax credit may be of specified duration or indefinite. Sometimes the business may receive tax abatement – property tax abatements are especially favored. Again, this may be time-limited or indefinite. Sometimes the tax or subsidy is implicit rather than explicit. Sometimes businesses will even receive production subsidies in excise form; that is, a per-unit subsidy on output produced.

Various forms of implicit or in-kind benefit are also offered. These include grants of land for production facilities and exemption from obligations such as payment for municipal services.

These do not exhaust the EDI possibilities but the list is representative and suggestive.

A Short, Sour History of EDIs

Proponents of EDIs indignantly reject the charge that their ideas are new. On the contrary, government favors to business trace back to the early years of the republic, they insist.

It is certainly true that the early decades of the 19th century saw a boom – today, we would call it a “bubble” – in the building of canals, primarily as transportation media. The Erie Canal was the most famous of these. Although the canals were privately owned, they were heavily subsidized and supported by government. Are we surprised, then, that the canal boom went bust, sinking most of its investors like sash weights? Railroads are traditionally given credit for spearheading U.S. economic development in the 19th century, and the various special favors they won from state and local governments are legendary. They include subsidies and extravagant rights of way on either side of their trackage. But economist Robert Fogel won a Nobel Prize for his downward revision of the importance of railroads to the economic growth of 19th-century America, so there is less there than meets the mainstream historical eye.

The modern emphasis on EDIs can be traced back to the state industrial finance boards of the 1950s. These became more active in the late 1960s and 70s when the national economy went stagnant with simultaneous inflation and recession. Like European national governments today, state and local governments were trying to steal businesses from each other. They lacked central banks and the power to print money, so they couldn’t devalue their currencies as European nations are now doing serially. Instead, they used selective economic benefits as their tools for redistributing businesses in their favor. And, like Europe today, they found that these methods only work as intended when employed by the few. When everybody does it simultaneously, they cancel each other out. One state steals Business A from another, but loses Business B. How do we know whether that state has gained or lost on net balance? We don’t, but in the aggregate nobody wins because businesses are simply being reallocated – and not for the better. Of course, we haven’t yet stopped to consider whether the state even gained from wooing Business A in the first place.

We can look back on many celebrated startups and relocations that were midwived by EDIs. In Tennessee, Nissan got EDI subsidies for relocating to the state in 1980. Later, GM built its famous Saturn plant there. In both cases, the big selling point was the large number of jobs ostensibly created by the project. We can get some idea of the escalation in the EDI bidding sweepstakes by comparing the price-tag per job over time. The Nissan subsidies cost roughly $11,000 per job created. At this price, it is hard to envision an economic bonanza for the host community, but compare that to the $168,000 per job created that went to Mercedes Benz for relocating to Alabama in 1993. In 1978, Volkswagen promised 20,000 jobs for the $70 million it got for moving to Pennsylvania, but ended up delivering only about 6,000 jobs before closing the plant within a decade.

There is every reason to believe that these results were the rule, not the exception. Economists have identified the phenomenon known as the “winner’s curse,” in which winning bidders often find that they had to bid such a high price to win that their benefits were eaten up. Economists have long objected to the government practice of setting quotas on imported goods because the quota harms domestic consumers more than it benefits domestic producers. Moreover, governments customarily give import licenses to politically favored businesses. Economists plead: Why not open up the licenses to competitive bid? That would force would-be beneficiaries of the artificial shortage created by the quota to eat up their monopoly profits in the price they pay for the import license. Then taxpayers would benefit from the revenue, making up for what they lose in consumption of the import good. This same principle prevents cities from benefitting when they “bid” against other cities to lure firms by offering them subsidies and tax credits – they have to offer the firm such lucrative benefits to win the competition against numerous other cities that any benefits provided by the relocating business are eaten up by the subsidy price the city pays.
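A stylized numerical sketch of that bidding logic, with all figures assumed, shows how the subsidy tends to swallow the prize.

# Stylized sketch of the "winner's curse" bidding described above.
# All numbers are assumed for illustration only.

true_benefit = 50_000_000   # assumed value of landing the plant to the host city

# Each rival city bids on the basis of its own estimate of that benefit;
# the most optimistic estimator "wins" the competition.
estimates = [0.8, 0.9, 1.0, 1.1, 1.2]                 # estimates as multiples of the true benefit
bids = [0.95 * e * true_benefit for e in estimates]   # each city tries to keep about 5% of its estimate

winning_bid = max(bids)
print(f"Winning subsidy offer: ${winning_bid:,.0f}")                 # $57,000,000
print(f"Actual benefit:        ${true_benefit:,.0f}")                # $50,000,000
print(f"Winner's net 'gain':   ${true_benefit - winning_bid:,.0f}")  # $-7,000,000
# With enough bidders, the winner is usually the most optimistic estimator,
# so the subsidy tends to eat up, or exceed, the benefit it was meant to capture.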

The Economics of Business Location

The general public probably envisions an economic textbook with a chapter on “economic development” and tips on how to lure businesses and which types of business are the most beneficial, as well as tables of “multiplier” benefits for each one.

Not! The theory of economic development is silent on this subject. The only applicable economic logic is imported from the theory of international trade. The case of import quotas provided one example. The specter of European nations futilely trying to outdo each other in trashing the value of their own currencies is another; international economists use the acerbic term “beggar thy neighbor” to characterize the motivation behind this strategy. It applies equally to states and cities that poach on businesses in neighboring jurisdictions, trying to lure them across state or municipal boundaries where they can pay local taxes and provide prestigious photo opportunities for politicians.

What about the Keynesian theory of the “multiplier,” in which government spending has a multiple effect on income and employment? Even if it were true – and all major Keynesian criticisms of neoclassical theory have been overturned – it would apply only under conditions of widespread unemployment. It would also apply only to national governments that can control policies for the entire nation and have the power to control and alter the supply of money and credit and rates of interest. Thus, the principle would be completely inapplicable to state and local governments anyway.

Economists believe that there is an economically efficient location for a business. Typically, this will be the place where it can obtain its inputs at lowest cost. Alternatively, it might be where it can ship its output to consumers the cheapest. If EDIs cause a business to locate away from this best location by falsely offsetting the natural advantages of another location, they are harming the consumers of the goods and services produced by the businesses. Why? The business is incurring higher costs by operating in the wrong location, and these higher costs must be compensated by a higher price paid by consumers than would otherwise be true. That higher price combines with the subsidies paid by taxpayers in the host community to constitute the price paid for violating the dictates of economic efficiency.

Why do economists obsess over efficiency, anyway? The study of economics accepts as a fact that human beings strive for happiness. In order to attain our goals, we must make the best use of our limited resources. That requires optimal consumer choice and cost minimization by producers. When government – which is a shorthand term for the actions of politicians, bureaucrats and lower-level employees acting in their own interests – mucks up the signaling function of market prices, this distorts the choices made by consumers and producers. Efficiency is reduced. And this effect is far from trivial. A previous EconBrief discussed an estimate that federal-government regulations since 1949 have reduced the rate of economic growth in the U.S. by a factor of three, implying that average incomes would be roughly $125,000 higher today in their absence.

EDIs are a separate issue from regulation. They are more recent in origin but growing in importance. In 1995, the Minneapolis Federal Reserve published a study by economists Melvin Burstein and Arthur Rolnick, entitled “Congress Should End the Economic War Between the States.” At about the same time, the United Nations published its own study dealing with a similar phenomenon at the international level.

Borrowing once again from the theory of international trade, these studies view production in light of the principle of comparative advantage. Countries (or states, or regions, or cities, or neighborhoods, or individual persons) specialize in producing goods or services that they produce at lower opportunity cost than competitors. Freely fluctuating market prices will reflect these opportunity costs, which represent the monetary value of alternative production foregone in the creation of the comparative-advantage good or service. Free trade between countries (or states, regions, cities, neighborhoods or persons) allows everybody to enjoy the consumption gains of this optimal pattern of production.
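A minimal textbook illustration of comparative advantage, with assumed labor requirements, shows how the opportunity-cost comparison works.

# Illustrative sketch only: labor hours required per unit of output in two
# hypothetical countries (all numbers assumed).
hours = {
    "Home":    {"steel": 4, "wheat": 2},
    "Foreign": {"steel": 6, "wheat": 12},
}

# Opportunity cost of one unit of steel, measured in wheat foregone:
for country, h in hours.items():
    print(country, "gives up", h["steel"] / h["wheat"], "units of wheat per unit of steel")
# Home gives up 2.0 wheat per steel; Foreign gives up 0.5 wheat per steel.
# Foreign has the lower opportunity cost in steel and Home in wheat, so each
# gains by specializing and trading at any price between 0.5 and 2.0 wheat per
# unit of steel, even though Home is absolutely more productive in both goods.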

Burstein, Rolnick, the U.N., et al felt that politicians should not be allowed to muck up free markets for their own benefit and said so. That debate has continued ever since in policy circles.

The Umpires Strike Back: EDI Proponents Respond 

Responses of EDI proponents have taken two forms. The first is anecdotal. They cite cases of particular successful EDI regimes or projects. The cited case is usually a city like Indianapolis, IN, which enjoyed a run of success in luring businesses and a concurrent spurt of economic growth. A less typical case is Kansas City, KS, which languished for several decades in prolonged decay with a deserted, crumbling downtown area and crime-ridden government housing projects and saw its tax base steadily disintegrate. The city subsidized a NASCAR-operated racing facility on the western edge of its county, miles away from its downtown base. It also subsidized a gleaming shopping and entertainment district slightly inward of the racetrack. Both NASCAR and the shopping district have benefitted from these moves, and politicians have claimed credit for revitalizing the city by their efforts. A recent Wall Street Journal column described the policy as having revamped “the city and its reputation.”

The second argument consists of a few studies that claim to find a statistical link between the level of spending on EDIs and the rate of job growth in states. Specifically, these studies report “statistically significant” relationships between those two variables. This link is cited as justification for EDIs.

Both these arguments are extremely weak, not to say specious. It is widely recognized today that most investors are foolish to actively manage their own stock portfolios; e.g., to pick stocks in order to “beat the market” by earning a rate of return superior to the average rate available on (say) an index fund such as the S&P 500. Does that mean that it is impossible to beat the market? No; if millions of investors try, a few will succeed due to random chance or luck. Another few will succeed due to expertise denied to the masses.

Analogous reasoning applies to the anecdotal argument made by EDI proponents. A few cities are always enjoying economic growth for reasons having nothing to do with EDIs – demographic or geographic reasons, for example. With large numbers of cities “competing” via EDIs, a few will succeed due to random chance. But this does not make, or even bolster, the case for EDIs. Indeed, the use of the term “competition” in this context is really false, because cities do not compete with cities – only concrete entities such as businesses or individuals can compete with each other. It is really the politicians that are competing with each other. And this form of competition, quite unlike the beneficial form of competition in free markets, is inherently harmful.

This sophisticated rebuttal is overly generous to the anecdotal arguments for EDIs. Even if we assume that the EDIs produce a successful project – that is, if we assume that Saturn succeeds at its Tennessee plant or NASCAR thrives in Kansas City, KS – it by no means follows that one company’s gains translate into areawide gains in real income. A study by the late Richard Nadler found no gains at all in local Gross Domestic Product for Wyandotte County, in which Kansas City, Kansas resides, years after NASCAR had arrived. The logic behind this result, reviewed later, is straightforward.

The studies claiming to support EDIs lean heavily on the prestige of statistical significance. Alas, this concept is both misunderstood and misapplied even by policy experts. Its meaning is binary rather than quantitative. When a relationship is found “statistically significant,” that means it is unlikely to be the product of pure chance, but it says nothing about the quantitative strength or importance of the relationship. This caveat is especially germane when discussing EDIs, because all the other evidence tells us that EDIs are trivial in their substantive effect on business location decisions.
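A quick sketch shows why statistical significance says nothing about substantive importance; the effect size, noise level and sample sizes below are assumed for illustration.

# Sketch of the point above: with enough data, even a trivially small effect
# becomes "statistically significant." All numbers are assumed.
import math

effect_size = 0.02      # assumed: e.g., 0.02 jobs per $1,000 of EDI spending
std_dev     = 1.0       # assumed noise in the outcome across observations

for n in (100, 10_000, 1_000_000):
    t_stat = effect_size / (std_dev / math.sqrt(n))
    verdict = "significant" if abs(t_stat) > 1.96 else "not significant"
    print(f"n = {n:>9,}: t = {t_stat:6.2f} -> {verdict} at the 5% level")
# The verdict flips to "significant" purely because the sample grows, while
# the substantive effect stays just as tiny as it was before.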

For decades, intensive surveys have indicated that business executives select the optimal location for their business – then gladly take whatever EDIs are offered. In other words, the EDI is usually irrelevant to the actual location decision. But executives seal their lips when it comes to admitting this fact openly, because their interests lie in fanning the flames of the Economic War Between the States. That war keeps EDIs in place and subsidizes their moves and investments.

Thus, a statistical correlation between EDIs and job growth is not a surprise. But no case has been made that EDIs are the prime causal mover in differential job growth or economic growth among states, regions or cities.

Perhaps the best practical index of the demerits of EDIs would be the economic decline of big-spending blue states in America. These states have been high-tax, high-spending states that heavily utilized EDIs to reward politically favored businesses. This tactic may have improved the fortunes of those clients, but it has certainly not raised the living standards of the populations of those states.

If Not EDIs, What? 

It is reasonable to ask: If EDIs do not govern the wealth of states or cities, what does? Rather than offer selective inducements to businesses, governments would do better to offer across-the-board inducements via lower tax rates to businesses and consumers. Studies have consistently linked higher rates of economic growth with lower taxes on both businesses and individuals throughout the U.S.

Superficially, this strikes some people as counterintuitive. The word “selective” seems attractive; it suggests picking and choosing the best and weeding out the worst. Why isn’t this better than blindly lowering taxes on everybody?

In fact, it is much worse. Government bureaucrats or consultants are not experts in choosing which businesses will succeed or fail. Actually, there are very few “experts” at doing that; the best ones attain multi-millionaire or billionaire status and would never waste their time working for government. Governments fail miserably at that job. Better to allow the experts at stock-picking to pick stocks and relegate government to doing the very, very few things that it can and should do.

States and municipalities typically operate with budget constraints. They cannot create money as national governments can and are very limited in their ability to borrow money. So when they selectively give money to a few businesses with subsidies or tax credits, the remaining businesses or individuals have to pay for that in higher taxes. If lower taxes for a few are good for that few, then it follows that higher taxes for the rest must be bad for the rest. And this means that even if the subsidies promote success for the favored business, they will reduce the success of the other businesses and reduce the real incomes of consumers. In other words, the “economic development” promoted by government’s “subsidy hand” will be taken away by government’s “tax hand.” What the government giveth, the government taketh away. Oops.

Lower taxes for everybody work entirely differently. They change the incentives faced at the margin, causing people to work, save and invest more. The increased work effort causes more goods and services to be produced. The increased saving makes more financial resources available for investment by businesses. The increasing investment increases the amount of capital available for labor to work with, which makes labor more productive. This increased productivity causes employers to bid up wages, increasing workers’ real incomes.

Lest this process sound like a free lunch, it must be noted that unless the increased incomes are self-financing – that is, unless the increased incomes provide equivalent tax revenue at the lower rates – government will have to reduce spending in order to fulfill the conditions for stability. Since modern government is wildly inflated – heavily bureaucratized, over-administered and over-staffed as well as obese in size – this should not present a theoretical problem. In practice, though, the willingness to achieve this tradeoff is what has defined success and failure in economic development at the state and local level.

Markets Succeed. Governments Fail

EDIs fail because they are an attempt by government to improve on the workings of free markets. Free markets have only advantages while governments have only disadvantages. Free markets operate according to voluntary choice; governments coerce and compel. Voluntary choice allows people to adjust and fine-tune arrangements to suit their own happiness; compulsion makes no allowance for personal preference and individual happiness. Since human happiness is the ultimate goal, it is no wonder that markets succeed and governments fail.

Free markets convey vast amounts of information in the most economical way possible, via the price system. Since people cannot make optimal choices without possessing relevant information, it is no wonder that markets work. Governments suppress, alter and distort prices, thereby corrupting the informational content of prices. Indeed, the inherent purpose of EDIs is exactly to distort the information and incentives faced by particular businesses relative to the rest. It is no wonder, then, that governments fail.

Prices coordinate the activities of people in neighborhoods, cities, regions, states and countries. In order for coordination to occur, people should face the same prices, differing only by the costs of transporting goods from place to place. Free markets produce this condition. Governments deliberately interfere with this condition; EDIs are a classic case of this interference. No wonder that governments, and EDIs, fail.

DRI-173 for week of 1-25-15: Anti-Price-Gouging Laws: The Cure Is the Disease

An Access Advertising EconBrief:

Anti-Price-Gouging Laws: The Cure Is the Disease

This week, New York City Mayor Bill de Blasio warned of an impending snowfall of two to three feet, accompanied by high winds. In anticipation of the blizzard, he slapped the city with a travel ban, effective at 11 PM on the following day. Only official snow-clearance and law-enforcement vehicles would be allowed on the streets. He seized the opportunity to remind New Yorkers that the travel emergency would trigger enforcement of New York State’s anti-price-gouging law, which forbids raising prices on goods and services beyond pre-emergency levels. Violations would be punished sternly, he assured his audience.

Oops. In the event, the blizzard forecast proved… er, optimistic in the quantitative sense or pessimistic in the qualitative sense. Snowfall fell short of one foot, causing no end of local grumbling by the ingrates who couldn’t simply be satisfied to avert disaster.

To economists, though, the real disaster isn’t the unavoidable inclement weather that strikes every year, nor is it the occasional failure of accurate weather forecasting. It is the self-infliction of wounds by laws passed to constrain a non-existent practice called “price-gouging.” The law purports to cure a non-existent ailment. The cure is far worse than anything the “disease” could inflict.

State Laws to Prevent and Punish “Price Gouging”

Nobody knows the origin of the term “price gouging.” It probably derives from the exercise of monopolies granted by monarchs under the old English common law, which is where we get the term “monopoly.” Since nobody could legally compete with them, they could figuratively gouge their price from the consumer’s hide without interference.

With the advent of big government in the 20th century, it was only a matter of time until this resentment of sellers was written into law. Legislatures needed a pretext for acting against sellers, though. Academia provided it in the 1930s with the “Imperfect Competition” revolution in economic theory. Led by Edward Chamberlin and Joan Robinson, this school pointed out that few, if any, actual markets corresponded to the textbook definition of a “perfect” market. Perfect competition required that no individual seller supply a sufficient quantity of output to materially influence market price through its pricing and output decisions. It also required that consumers view the output of each seller as homogeneous – otherwise, product quality might confer some degree of market (pricing) power on an individual seller. There should also be no barriers to entry into, or exit from, the market.

So all markets were “imperfect” and all sellers possessed “market power.” This homely truth gave the profession the small opening it needed to make a huge leap of logic: Most sellers were monopolists who must be restrained by the benevolent and enlightened force of government regulation from exercising their monopoly power. This conclusion provided a rationale for government intervention at the level of individual markets, or microeconomics. It was analogous to the role played in the 1930s by Keynesian economic theory in justifying government intervention at the macroeconomic level.

In World War II, the federal government’s Office of Price Administration (OPA) levied price controls, or maximum prices, on hundreds of industries. Although the public rationale for these controls was to prevent inflation, they served to accustom both the public and private business to the notion of government control of the price system. In practice, patriotism was at least as important in enforcing the price controls as inflation-control. Business owners who raised prices were open to charges of “war profiteering.” This was unpatriotic; it was “taking advantage of the crisis to make money” when they should have been “doing their part by sharing the sacrifice” borne by everybody else. In peacetime, the rationale of monopoly regulation could be slipped neatly into the vacuum left by inflation-fighting and patriotism.

In the late 1970s, the U.S. struggled in the throes of an “energy crisis.” The upward spike in oil prices initiated by the Organization of Petroleum Exporting Countries (OPEC) had hit Western industrialized nations hard. Threatened with across-the-board cost increases and associated widespread unemployment, their central banks chose the same remedy that is now being employed: rapid money creation. This created accelerating inflation but did not do much to resolve the unemployment problem. The melding of stagnation and inflation gave rise to a hybrid term of disaffection, stagflation.

It was against this backdrop that home heating fuel prices in New York State rose dramatically in the fall of 1978. The evolving American tradition was to blame the seller for the underlying conditions of supply and demand giving rise to an existing price. That is just what the New York state legislature did when it passed the first state law proscribing price-gouging. It took four years for Hawaii to produce the second such law in 1983. Connecticut and Mississippi followed suit in 1986. Then came the deluge; eleven more states joined the party in the 1990s and sixteen more in the first decade of the new millennium. Today, 38 states have laws forbidding price-gouging in some form. Just what is it that these laws forbid, anyway?

Amazing as it seems, the answer is far from clear. But the common denominator between the laws is the notion that special circumstances or “emergency” justify a significant curtailment of pricing freedom. When we try to determine what the curtailment is, why it is justified and which circumstances qualify as emergencies, we find ourselves shrouded in ambiguity.

In a reasonable world, a judicial review of these statutes would undoubtedly find them void for vagueness. But that is hardly their worst drawback. Even if it were possible to objectively and precisely define an emergency and specify a quantitative curtailment of price tailored to it, we would not want to do anything so perverse and counterproductive even if we could.

The Economics of Emergency Behavior

How do people behave in emergencies? Why do governments and opponents of free markets object to that behavior? What kind of behavior is desirable in those circumstances?

Consider the example posed by the impending blizzard in New York City. In these situations, people routinely rush to acquire advance stocks of common everyday consumption goods. Included in this category are such goods as food (eggs, milk, water, ice, coffee, soft drinks, bacon, meat), fuel (gasoline, propane, heating oil), household supplies (toilet paper, light bulbs, paper towels, batteries, radios, shovels, ice melt) and suitable clothing (heavy coats, gloves, hats, boots). The vast majority of this behavior is simply a reallocation of purchases in time, or an intertemporal reallocation of demand. There is nothing invidious or harmful about this. Indeed, it obeys the simple principle of preparedness that we all learned as children, whether in the Boy Scouts or in school.

Governments typically act as though this is the result of panic – as if, because everybody can’t immediately purchase everything they want from stocks immediately on hand, it must be a bad thing. But this is ridiculous. There is no reason to treat this increase in demand differently than any other increase in demand for any other reason. After all, those affected certainly have good reasons for wanting the extra stocks, with their government promising them that a blizzard of unprecedented proportions will certainly descend upon them! The only question is: What is the best way of getting the people the extra goods they need, allowing them to push their purchases forward in time to prepare for the emergency?

The Laws of Supply and Demand are the best means ever invented for solving that problem. They act automatically and immediately without the need for government action or intervention. The Law of Supply says that sellers will produce more output for sale at relatively higher prices. The Law of Demand says that buyers will wish to purchase less at relatively higher prices. When the blizzard announcement is made, people rush to stores and to their computers to make purchases. At the previously existing prices, people would be willing to purchase vastly increased quantities of goods. But they don’t get those vastly increased quantities – at least, not instantaneously. The Law of Supply says that sellers will be willing to supply larger quantities of output, all right – but only at higher prices. Well, at successively higher prices consumers are progressively less enthusiastic about buying more output – they still want more, mind you, just not as much as they would if price were held rock steady. Eventually, price will rise enough to equate the willingness of sellers to produce and sell more and the willingness of buyers to buy more. In this context, “eventually” means a matter of hours or a day or so.
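A minimal sketch with assumed linear supply and demand curves shows the mechanism at work.

# Illustrative sketch only: assumed linear supply and demand curves for,
# say, cases of bottled water before a storm.

def equilibrium(d_intercept, d_slope, s_intercept, s_slope):
    # Qd = d_intercept - d_slope * P ;  Qs = s_intercept + s_slope * P
    price = (d_intercept - s_intercept) / (d_slope + s_slope)
    quantity = s_intercept + s_slope * price
    return price, quantity

# Normal week:
p0, q0 = equilibrium(d_intercept=100, d_slope=10, s_intercept=20, s_slope=10)
# Blizzard announced: demand shifts outward; the supply curve is unchanged.
p1, q1 = equilibrium(d_intercept=160, d_slope=10, s_intercept=20, s_slope=10)

print(f"Before: price {p0:.2f}, quantity {q0:.0f}")   # price 4.00, quantity 60
print(f"After:  price {p1:.2f}, quantity {q1:.0f}")   # price 7.00, quantity 90
# At the old price of 4, buyers would now want 120 units; the higher price of 7
# coaxes 30 more units out of sellers and trims purchases back to 90.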

Notice that the oft-expressed fears of government are shown to be groundless. There is no need for government to step in, regulate price or otherwise prevent an economic disaster caused by panic reactions to the weather disaster. Changes in price induce the necessary changes in behavior that do two things – cause sellers to produce and sell more goods and people to want less. The combination of those two things solves the problem.

There is a second kind of behavioral reaction common to some types of disaster emergencies, such as hurricanes and tornadoes. The disaster may cause large amounts of destruction. This gives rise to demand for goods for replacement purposes, over and above the intertemporal reallocation demand just analyzed. The replacement-demand case is best considered as an addendum to the first case, by assuming that the price system has handled the reallocation demand adequately but now faces the problem of handling the replacement demand for goods that have been destroyed or damaged by the disaster. This will include many of the same goods mentioned above, but also capital goods and consumer durables such as homes, vehicles, buildings and infrastructure. The goods may be demanded in final form or may need to be reconstructed or repaired, in which case the required inputs will be in demand.

Replacement demand differs from reallocation demand in that the latter merely shifts purchases forward in time while the former actually increases the amount of goods demanded locally. There is obviously some scope for supplying the needed goods by drawing on local stocks, by drawing resources away from the production of other goods, and by pressing unused local resources into service. But the only way to fully satisfy replacement demand is by importing goods and resources into the local area from outside; that is, from other cities, states, regions and even countries.

Once again, the price system solves this problem. Higher local prices will increase profits locally; the higher local profits will attract resources from other cities, states and regions. The increase in resources will comprise an increase in supply that will reduce the shortfall in replacement goods. As long as a shortfall exists, local demand will keep prices high. Those high prices will keep profits high enough to attract outside resources. If the affected area is large enough and the time frame long enough, international investment may even be attracted to the area. Only when the shortfall in replacement demand is eliminated will prices no longer signal the need for an inflow of goods and resources from outside.

The cases of reallocation and replacement demand do not exhaust all the possibilities created by emergencies and disasters, but they do handle those targeted by state price-gouging laws. We can see that those laws are a clear case of reinventing the wheel. So far, so bad. Now – how does the reinvention work?

Anti-Price-Gouging Laws: Reinventing the Wheel Square 

So the price system solves the problem posed by the need for emergency disaster planning. Is it possible that anti-price-gouging laws might solve it better? Or more fairly?

Anti-price-gouging laws are intended to stop price from rising or, more precisely, to stop price from rising beyond a certain point. In the analysis presented above, the price system solved the problem of emergency disaster planning precisely through the medium of an increasing price. Thus, the laws are an economic contradiction in terms. They seek to solve a problem by denying the solution to the problem. So the only way anti-price-gouging laws could improve on the price system would be by substituting another solution for price increases as a means of getting more goods and resources and persuading people to want fewer goods and resources.

They do not substitute any alternative solution. There is no alternative solution. Instead, they assert that an alternative state of affairs – a lower price and fewer goods and resources – ought to be preferred to the one that the price system would bring about. The laws do not explain or justify the superiority of the alternative they exalt. They just assert it.

The laws are justified by rhetoric. The rhetoric claims to be protecting consumers against rapacious sellers who are taking advantage of them by raising prices in an emergency. This contravenes the basic logic of economic exchange, which says that exchange occurs between a willing buyer and a willing seller. So how can either one be “taken advantage of?” The laws assert that it is “unfair” to charge higher prices in an emergency than under non-emergency conditions. This also contravenes established legal precedent, which defines a “fair price” as one agreed upon by a willing buyer and a willing seller. So how can such a price be “unfair?” It also contravenes centuries of human behavior, during which higher prices have been charged for emergency medicine than non-emergency medicine, for emergency hotel rooms than non-emergency hotel rooms and so on.

Since particular anti-price-gouging laws specify exact limits on price increases during emergencies compared to pre-emergency prices, it behooves us to deal with this specific issue. Take the example of a 10% limit on price increases – which, as it happens, is the limit imposed in more than one state. The emphasis placed on this number by proponents is the “fairness” of a 10% gross margin. But this is a non-sequitur. In the first place, there is not and never has been any objective standard of fairness by which 10% (or any other number) could be adjudged fair.

Now go beyond the issue of fairness to consider the internal logic of the process itself. We previously discussed the dynamic reactions of sellers and buyers, in which each group reacts to the rising price by, respectively, increasing output and reducing desired purchases while continuing to want more than is available. As price goes up by 1%, 2%, 5%, 9%…the law and its proponents approve the outcomes. But suddenly when the price hits 10% – bang! The adjustment process must stop even when some sellers and buyers want it to continue. This is self-contradictory nonsense; the laws' proponents cannot justify this arbitrary limit without explaining why production and sale above the 10% limit is wrong while it is right below the limit.
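A companion sketch, reusing the invented curves from the earlier example, shows what a hypothetical 10% cap does to that adjustment process. The cap is assumed to bind at 10% above the pre-emergency price; again, every number is illustrative.

```python
# Continuing the invented numbers from the earlier sketch: suppose a price-gouging law
# caps the price at 10% above the pre-emergency level instead of letting it reach
# the new market-clearing level of $7.00.

pre_emergency_price = 4.00
capped_price = 1.10 * pre_emergency_price          # $4.40 ceiling

qty_demanded = 1600 - 100 * capped_price           # demand after the blizzard announcement
qty_supplied = 200 + 100 * capped_price            # what sellers will offer at the capped price

shortage = qty_demanded - qty_supplied
print(f"Capped price: ${capped_price:.2f}")
print(f"Quantity demanded: {qty_demanded:.0f}, quantity supplied: {qty_supplied:.0f}")
print(f"Unsatisfied demand (shortage): {shortage:.0f} units")
# At the market-clearing price the shortage is zero; at the capped price, 520 units of
# demand go unmet and must be rationed by queues, luck or favoritism.
```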

Proponents argue that complaints by the citizenry justify restriction of high-priced production and sales. But complaints about speech don’t justify arbitrary restrictions on the First Amendment. The law has not traditionally allowed third parties to prohibit economic transactions among consenting transactors except on moral grounds – and anti-price-gouging laws make no valid moral case.

Another way of looking at this same example is to ask: Why do the laws allow any price increases at all? It would seem that proponents are guiltily aware that people want and need more goods and resources and that price increases are necessary for provision of them. As in the age-old joke (“we’ve already established what you are; now we’re just arguing about the price”), anti-price-gouging proponents have implicitly given up and recognized the truth about economic logic, but are determined to argue about the price of those additional goods and how it is determined.

To sum up briefly, anti-price-gouging laws do not solve the problem they purport to address because they do nothing whatever to provide more goods and resources to local areas affected by disaster emergencies. The rhetoric asserted in support of their claims of fairness – which attempts to persuade constituents that they should be happier with lower prices and fewer goods and resources – is illogical and contradicts common practice and long historical experience.

Anti-price-gouging laws not only reinvent the wheel – they reinvent it square.

Black Markets and Other Costs of Shortages

In wartime, which we can view as the ultimate emergency, governments commonly levy comprehensive wage and price controls that prevent prices from rising at all. These are even more draconian than current anti-price-gouging laws. Governments print money to finance the expenditures necessary to conduct the war. When the printed money finds its way into the income stream, it forms the basis for additional demand for goods and services. Citizens attempt to bid up the prices for goods and services. But the price controls do not allow this to happen. The result is chronic shortages.

Economists know this process well. Every microeconomics textbook describes it. Buyers must incur substantial "shoeleather costs" associated with being first in line to get goods and services; failing to do so may frustrate their purchase desires. Those costs are an increase in the effective economic price paid for the good or service. The quality of goods and services is degraded as sellers try to reduce quality as an alternative to raising price. The existence of a shortage allows sellers to pick and choose the buyers who will be satisfied and those who will be disappointed. If sellers have a taste for discrimination, they can exercise it freely. In efficiently functioning competitive markets, on the other hand, this taste is severely constrained by the fact that market-clearing penalizes a seller who discriminates against a willing buyer. The output not sold to that buyer may go unsold – either permanently or for a long interval.

Most pertinent to the example of anti-price-gouging laws is the case of black markets. A short-run shortage means that the highest price a consumer would be willing to pay to get one more unit of the good or service is well above the market-clearing price that would prevail if government had never slapped on the controls in the first place. Of course, that extra-high price is not a legal price. But in an emergency, some consumers will be willing to disregard legal niceties to get their hands on the good or service. And sellers will be willing to violate the law to earn the super-high rate of profit that this super-high price will generate. Thus, conditions are perfect for existence of a black (illegal) market.

By passing anti-price-gouging laws, governments deliberately create the ideal environment for black markets. When black markets flourish, politicians put on their sternest face and solemnly promise to punish the evil, greedy malefactors.

Recent Attempts to Rehabilitate Anti-Price-Gouging Laws

Opponents of free markets now control the political process throughout the world. They hold the upper hand in public discourse. Emboldened by their superior status, they have recently sought to rehabilitate the long-moribund intellectual case for anti-price-gouging laws. We can best summarize these attempts by quoting from a prominent source. Harvard University political philosophy professor Michael Sandel's book Justice: What's the Right Thing to Do? argues that economists stress economic welfare and freedom at the expense of virtue.

“Emotion is relevant,” claims Sandel – thereby rejecting millennia of philosophical argument in favor of reason and against emotion. Proponents of anti-price-gouging laws reflect “something more visceral than welfare or freedom. People are outraged at ‘vultures’ who prey on the desperation of others and want them punished, not rewarded with windfall profits… Outrage of this kind is anger at injustice… Greed is a vice, a bad way of being, especially when it makes people oblivious to the suffering of others… Price-gouging laws cannot banish greed, but they can at least restrain its most brazen expression, and signal society’s disapproval of it.” Sandel champions the idea of “shared sacrifice for the common good.”

Sandel's ideas encapsulate the quintessence of 20th-century liberal thought – pure undifferentiated emotion, all logic and intellectual distinctions distilled out. If "emotion is relevant," where do we start and stop in admitting it into the argument? Obviously, we start with the political constituents of liberals and stop when all their political opponents have been demonized. Subjective terms of opprobrium like "vultures," "greed" and "windfall profits" have no objective correlative in science or logic. The behavior he complains of has specific economic value in achieving the goals of those he pretends to champion; that is, the greed of the vultures turns out to benefit the outraged sufferers and the windfall profits are the necessary by-product of their deliverance.

When goods and resources flow to the disaster area from the outside, people on the outside have fewer goods and services and those within the disaster area have more. This is real shared sacrifice in true economic terms, not the phony symbolic shared sacrifice Sandel pontificates about. This is the price system at work.

Are emergency-room doctors vultures? Do ambulance companies earn windfall profits? Are hotel owners greedy? They participate in everyday, routine market transactions in which prices rise in response to special, emergency circumstances. Why aren’t they accused of being evil and immoral?

Because there’s no political profit in it, that’s why. So much for the “new” arguments for anti-price-gouging laws, same as the old.

If Governments Know the Truth, Why Do They Enact Anti-Price-Gouging Laws?

It is obvious that governments know the economic truth about anti-price-gouging laws – otherwise, they would not allow any price increases at all during emergencies. So why do they insist on enacting laws that can only hurt people without helping them?

The answer is depressingly clear. In the environment of big government and absolute democracy, governments exist to further their own power, not to serve the needs of the public at large. Proponents of anti-price-gouging laws are opponents of free markets. These people constitute a special interest. Government serves this special interest by serving its own interest; i.e., the interests of politicians, bureaucrats and government employees.

Politicians observe the protests of anti-free-market groups. They respond with alacrity by promising to restore "fairness" with anti-price-gouging laws. Of course, this will require new laws. The laws will require a new agency or the expansion of an existing agency. This will require hiring more employees and more administrators, as well as a bureaucrat to oversee them. Legislators will oversee the budget of the agency. The agency will exert power over all the businesses in the state at any time designated as an "emergency." Legislators have the privilege of deciding what constitutes an emergency, giving them additional power over those businesses. Politicians will curry favor with the public by posing as public saviors and benefactors during every emergency, rather than being excoriated for taking a "do nothing" stance.

Government is in the business of producing more government, not benefitting the public. It benefits only that subset of the public directly connected with itself. That general rule applies to anti-price-gouging laws as it does to all other aspects of government not strictly within its true, narrow province.

There is no objective crime or bad outcome called “price gouging.” But the laws enacted to prevent it do have objectively bad outcomes and therefore constitute an evil in themselves.

DRI-172 for week of 1-18-15: Consumer Behavior, Risk and Government Regulation

An Access Advertising EconBrief: 

Consumer Behavior, Risk and Government Regulation

The Obama administration has drenched the U.S. economy in a torrent of regulation. It is a mixture of new rules formulated by new regulatory bodies (such as the Consumer Financial Protection Bureau), new rules levied by old, preexisting federal agencies (such as those slapped on bank lending by the Federal Reserve) and old rules newly imposed or enforced with new stringency (such as those emanating from the Department of Transportation and bedeviling the trucking industry).

Some people within the business community are pleased by the new rules, but it is fair to say that most are not. The President and his subordinates, however, have been unyielding in their insistence that the rules are not merely desirable but necessary to the health, well-being, vitality and economic growth of America.

Are the people affected by the regulations bad? Do the regulations make them good, or merely constrain their bad behavior? What entitles the particular people designing and implementing the regulations to perform in this capacity – is it their superior motivations or their superior knowledge? That is, are they better people or merely smarter people than those they regulate? The answer can't be democratic election, since regulators are not elected directly. We are certainly entitled to ask how a President could possibly suppose that a relative handful of people can effectively regulate an economy of over 300 million people. If they are merely better people, how do we know that their regulatory machinations will succeed, however well-intentioned they are? If they are merely smarter people, how do we know their actions will be directed toward the common good (whatever in the world that might be) and not toward their own betterment, to the exclusion of all else? Apparently, the President must select regulators who are both better and smarter than their constituents. Yet government regulators are typically plucked from comparative anonymity rather than from the firmament of public visibility.

Of all American research organizations, the Cato Institute has the longest history of examining government regulation. Recent Cato publications help rebut the longstanding presumptions in favor of regulation.

The FDA Graciously Unchains the American Consumer

In “The Rise of the Empowered Consumer” (Regulation, Winter 2014-2015, pp.34-41, Cato Institute), author Lewis A. Grossman recounts the Food and Drug Administration’s (FDA) policy evolution beginning in the mid-1960s. He notes that “Jane, a [hypothetical] typical consumer in 1966… had relatively few choices” across a wide range of food-products like “milk, cheese, bread and jam” because FDA’s “identity standards allowed little variation.” In other words, the government determined what kinds of products producers were allowed to legally produce and sell to consumers. “Food labels contained barely any useful information. There were no “Nutrition Facts” panels. The labeling of many foods did not even include a statement of ingredients. Nutrient content descriptors were rare; indeed, the FDA prohibited any reference whatever to cholesterol. Claims regarding foods’ usefulness in preventing disease were also virtually absent from labels; the FDA considered any such statement to render the product an unapproved – and thus illegal – drug.”

Younger readers will find the quoted passage startling; they have probably assumed that ingredient and nutrient-content labels were forced on sellers over their strenuous objections by noble and altruistic government regulators.

Similar constraints bound Jane should she have felt curiosity about vitamins, minerals or health supplements. The types and composition of such products were severely limited, and their claims and advertising were even more severely limited by the FDA. Over-the-counter medications were equally limited – few in number and puny in their effectiveness against such infirmities as "seasonal allergies… acid indigestion…yeast infection[s] or severe diarrhea." Her primary alternative for treatment was a doctor's visit to obtain a prescription, which included directions for use but no further enlightening information about the therapeutic agent. Not only was there no Internet, but copies of the Physicians' Desk Reference were also unavailable in bookstores. Advertising of prescription medicines was strictly forbidden by the FDA outside of professional publications like the Journal of the American Medical Association.

Food substances and drugs required FDA approval. The approval process might as well have been conducted in Los Alamos under FBI guard as far as Jane was concerned. Even terminally ill patients were hardly ever allowed access to experimental drugs and treatments.

From today’s perspective, it appears that the position of consumers vis-à-vis the federal government in these markets was that of a citizen in a totalitarian state. The government controlled production and sale; it controlled the flow of information; it even controlled the life-and-death choices of the citizenry, albeit with benevolent intent. (But what dictatorship – even the most savage in history – has failed to reaffirm the benevolence of its intentions?) What led to this situation in a country often advertised as the freest on earth?

In the late 19th and early 20th centuries, various incidents of alleged consumer fraud and the publicity given them by various muckraking authors led Progressive administrations led by Theodore Roosevelt, William Howard Taft and Woodrow Wilson to launch federal-government consumer regulation. The FDA was the flagship creation of this movement, the outcome of what Grossman called a “war against quackery.”

Students of regulation observe this common denominator. Behind every regulatory agency there is a regulatory movement; behind every movement there is an “origin story;” behind every story there are incidents of abuse. And upon investigation, these abuses invariably prove either false or wildly exaggerated. But even had they been meticulously documented, they would still not substantiate the claims made for them and not justify the regulatory actions taken in response.

Fraud was illegal throughout the 19th and 20th centuries and earlier. Competitive markets punish producers who fail to satisfy consumers by putting those producers out of business. Limiting the choices of producers and consumers harms consumers without providing compensating benefits. The only justification for FDA regulation of the type practiced during the first half of the 20th century was that government regulators were omniscient, noble and efficient while consumers were dumbbells. That is putting it baldly, but it is hardly an overstatement. After all, consider the situation that exists today.

Plentiful varieties of products exist for consumers to pick from. They exist because consumers want them to exist, not because the FDA decreed their existence. Over-the-counter medications are plentiful and effective. The FDA tries to regulate their uses, as it does for prescription medications, but thankfully doctors can choose from a plethora of “off-label” uses. Nutrient and ingredient labels inform the consumer’s quest to self-medicate such widespread ailments as Type II diabetes, which spread to near-epidemic status but is now being controlled thanks to rejection of the diet that the government promoted for decades and embrace of a diet that the government condemned as unsafe. Doctors and pharmacists discuss medications and supplements with patients and provide information about ingredients, side effects and drug interactions. And patients are finally rising in rebellion against the tyranny of FDA drug approval and the pretense of compassion exhibited by the agency’s “compassionate use” drug-approval policy for patients facing life-threatening diseases.

Grossman contrasts the totalitarian policies of yesteryear with the comparative freedom of today in polite academic language. “The FDA treated Jane’s… cohort…as passive, trusting and ignorant consumers. By comparison, [today’s consumer] has unmediated [Grossman means free] access to many more products and to much more information about those products. Moreover, modern consumers have acquired significant influence over the regulation of food and drugs and have generally exercised that influence in ways calculated to maximize their choice.”

Similarly, he explains the transition away from totalitarianism to today’s freedom in hedged terms. To be sure, the FDA gave up much of its power over producers and consumers kicking and screaming; consumers had to take all the things listed above rather than receive them as the gifts of a generous FDA. Nevertheless, Grossman insists that consumers’ distrust of the word “corporation” is so profound that they believe that the FDA exerts some sort of countervailing authority to ensure “the basic safety of products and the accuracy and completeness of labeling and advertising.” This concerning an agency that fought labeling and advertising tooth and claw! As to safety, Grossman makes the further caveat that consumers “prefer that government allow consumers to make their own decisions regarding what to put in their bodies…except in cases in which risk very clearly outweighs benefit” [emphasis added]. That implies that consumers believe that the FDA has some special competence to assess risks and benefits to individuals, which completely contradicts the principle that individuals should be free to make their own choices.

Since Grossman clearly treats consumer safety and risk as a special case of some sort, it is worth investigating this issue at special length. We do so below.

Government Regulation of Cigarette Smoking

For many years, individual cigarette smokers sued cigarette companies under the product-liability laws. They claimed that cigarettes "gave them cancer," that the cigarette companies knew it and consumers didn't, and that the companies were therefore liable for selling dangerous products to the public.

The consumers got nowhere.

To this day, an urban legend persists that the tobacco companies' run of legal success was owed to deep financial pockets and fancy legal footwork. That is nonsense. As the leading economic expert on risk (and the longtime cigarette controversy), W. Kip Viscusi, concluded in Smoke-Filled Rooms: A Postmortem on the Tobacco Deal, "the basic fact is that when cases reached the jury, the jurors consistently concluded that the risks of cigarettes were well-known and voluntarily incurred."

In the early 1990s, all this changed. States sued the tobacco companies for medical costs incurred by government due to cigarette smoking. The suits never reached trial. The tobacco companies settled with four states; a Master Settlement Agreement applied to remaining states. The aggregate settlement amount was $243 billion, which in the days before the Great Recession, the Obama administration and the Bernanke Federal Reserve was a lot of money. (To be sure, a chunk of this money was gobbled up by legal fees; the usual product-liability portion is one-third of the settlement, but gag orders have hampered complete release of information on lawyers’ fees in these cases.)

However, the states were not satisfied with this product-liability bonanza. They increased existing excise taxes on cigarettes. In “Cigarette Taxes and Smoking,” Regulation (Winter 2014-2015, pp. 42-46, Cato Institute), authors Kevin Callison and Robert Kaestner ascribe these tax increases to “the hypothesis… that higher cigarette taxes save a substantial number of lives and reduce health-care costs by reducing smoking, [which] is central to the argument in support of regulatory control of cigarettes through higher cigarette taxes.”

Callison and Kaestner cite research from anti-smoking organizations and comments to the FDA that purport to find price elasticities of demand for cigarettes of between -0.3 and -0.7, with the lower figure applying to adults and the higher to adolescents. (The words "lower" and "higher" refer to the absolute, not algebraic, value of the elasticities.) Price elasticity of demand is defined as the percentage change in quantity demanded associated with a 1 percent change in price. Thus, a 1% increase in price would cause quantity demanded to fall by between 0.3% and 0.7%, according to these estimates.
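The arithmetic implied by those elasticity figures can be sketched in a few lines. The 20% price increase below is an invented example, and full pass-through of a tax hike into the retail price is assumed.

```python
# Rough arithmetic behind the elasticity figures cited above. The 20% price increase
# is an invented example; full pass-through of the tax into price is assumed.

def pct_change_in_quantity(elasticity, pct_change_in_price):
    """Percentage change in quantity demanded implied by a constant price elasticity."""
    return elasticity * pct_change_in_price

price_increase = 20.0  # percent
for elasticity in (-0.3, -0.7):
    dq = pct_change_in_quantity(elasticity, price_increase)
    print(f"Elasticity {elasticity}: a {price_increase:.0f}% price rise implies "
          f"roughly a {abs(dq):.0f}% fall in cigarettes demanded")
```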

The problem with these estimates is that they were based on research done decades ago, when smoking rates were much higher. The authors estimate that today's smokers are mostly the young and the poorly educated, and that their price elasticities are very, very low. Higher cigarette taxes therefore have only a minuscule effect on cigarette consumption. They do not reduce smoking to any significant extent. Thus, they do not save on health-care costs.

They serve only to fatten the coffers of state governments. Cigarette taxes today play the role played by the infamous tax on salt levied by French kings before the French Revolution. When the tax goes up, the effective price paid by the consumer goes up. When consumption falls by a much smaller percentage than the price increase, tax revenues rise. Both the cigarette-tax increases of today and the salt-tax increases of the 17th and 18th centuries were big revenue-raisers.
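A back-of-the-envelope sketch, with every figure hypothetical, shows why inelastic demand turns a cigarette-tax hike into a revenue-raiser rather than a smoking deterrent.

```python
# Hypothetical numbers to show why inelastic demand makes a tax hike a revenue-raiser.
# Assume the tax is fully passed through to the retail price.

packs_sold = 1_000_000        # packs per year in some state (invented)
retail_price = 6.00           # dollars per pack, tax included
old_tax = 1.00                # existing excise tax per pack
new_tax = 2.00                # tax after a $1.00 increase
elasticity = -0.2             # a low elasticity, as described above for today's smokers

price_change_pct = (new_tax - old_tax) / retail_price * 100      # ~16.7% price rise
quantity_change_pct = elasticity * price_change_pct               # ~ -3.3%
new_packs_sold = packs_sold * (1 + quantity_change_pct / 100)

old_revenue = old_tax * packs_sold
new_revenue = new_tax * new_packs_sold
print(f"Consumption falls by only {abs(quantity_change_pct):.1f}%")
print(f"Tax revenue rises from ${old_revenue:,.0f} to ${new_revenue:,.0f}")
# Smoking barely declines, but the state's take roughly doubles -- the point made above.
```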

In the 1990s, tobacco companies were excoriated as devils. Today, though, several of the lawyers who sued the tobacco companies are either in jail for fraud, under criminal accusation or dead under questionable circumstances. And the state governments who “regulate” the tobacco companies by taxing them are now revealed as merely in it for the money. They have no interest in discouraging smoking, since it would cut into their profits if smoking were to fall too much. State governments want smoking to remain price-inelastic so that they can continue to raise more revenue by raising taxes on cigarettes.

 

Can Good Intentions Really Be All That Bad? The Cost of Federal-Government Regulation

The old saying “You can’t blame me for trying” suggests that there is no harm in trying to make things better. The economic principle of opportunity cost reminds us that the use of resources for one purpose – in this case, the various ostensibly benevolent and beneficent purposes of regulation – denies the benefits of using them for something else. So how costly is that?

In "A Slow-Motion Collapse" (Regulation, Winter 2014-2015, pp. 12-15, Cato Institute), author Pierre Lemieux cites several studies that attempted to quantify the costs of government regulation. The most comprehensive of these was by academic economists John Dawson and John Seater, who used variations in the annual Code of Federal Regulations as their index for regulatory change. In 1949, the CFR had 19,335 pages; by 2005, the total had risen to 134,261 pages, a seven-fold increase in under six decades. (Remember, this includes federal regulation only, excluding state and local government regulation, which might triple that total.)

Naturally, proponents of regulation blandly assert that the growth of real income (also roughly seven-fold over the same period) requires larger government, hence more regulation, to keep pace. This nebulous generalization collapses upon close scrutiny. Freedom and free markets naturally result in more complex forms of goods, services and social interactions, but if regulatory constraints "keep pace" this will restrain the very benefits that freedom creates. The very purpose of freedom itself will be vitiated. We are back at square one, asking the question: What gives regulators the right and the competence to make that sort of decision?

Dawson and Seater developed an econometric model to estimate the size of the bite taken by regulation from economic growth. Their estimate was that regulation has reduced economic growth by about 2 percentage points per year, on average. This is a huge reduction. Applied to 2011 GDP, it works out as follows: starting in 1949, had all subsequent regulation not happened, 2011 GDP would have been roughly $39 trillion higher, or about $54 trillion in total. As Lemieux put it: "The average American (man, woman and child) would now have about $125,000 more per year to spend, which amounts to more than three times [current] GDP per capita. If this is not an economic collapse, what is?"
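The compounding behind that counterfactual can be reconstructed roughly. The 2011 GDP baseline of about $15.5 trillion, the 311-million population figure and the uniform 2-percentage-point annual drag are simplifying assumptions, so the sketch lands near, not exactly on, the figures Lemieux cites.

```python
# Rough reconstruction of the Dawson-Seater counterfactual described above.
# Assumptions for illustration: actual 2011 GDP of about $15.5 trillion, a 2011 U.S.
# population of about 311 million, and a uniform 2-percentage-point-per-year growth
# drag from regulation over 1949-2011.

actual_gdp_2011 = 15.5        # trillions of dollars (approximate)
growth_drag = 0.02            # 2 percentage points per year
years = 2011 - 1949           # 62 years
population = 311e6            # approximate 2011 U.S. population

# Had growth been 2 points higher each year, GDP would be larger by the compound factor:
counterfactual_gdp = actual_gdp_2011 * (1 + growth_drag) ** years
foregone = counterfactual_gdp - actual_gdp_2011
per_person = foregone * 1e12 / population

print(f"Counterfactual 2011 GDP: about ${counterfactual_gdp:.0f} trillion")
print(f"Foregone output: about ${foregone:.0f} trillion")
print(f"Foregone output per person: about ${per_person:,.0f}")
# These come out in the neighborhood of the $39 trillion / $54 trillion / $125,000
# figures quoted above; the small gaps reflect the rounded assumptions.
```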

Lemieux points out that, while this estimate may strain the credulity of some, it also may actually incorporate the effects of state and local regulation, even though the model itself did not include them in its index. That is because it is reasonable to expect a statistical correlation between the three forms of regulation. When federal regulation rises, it often does so in ways that require corresponding matching or complementary state and local actions. Thus, those forms of regulation are hidden in the model to some considerable degree.

Lemieux also points to Europe, where regulation is even more onerous than in the U.S. – and growth has been even more constipated. We can take this reasoning further by bringing in the recent example of less-developed countries. The Asian Tigers experienced rapid growth when they espoused market-oriented economics; could their relative lack of regulation help explain this economic-development success story? India and mainland China turned their economies around when they turned away from socialism and Communism, respectively; regulation still hamstrings India, while China is dichotomized into a relatively autonomous small-scale competitive sector and a heavily regulated, planned, government-controlled big-business economy. Signs point to a recent Chinese growth dip tied to the bursting of a bubble created by easy money and credit granted to the regulated sector.

The price tag for regulation is eye-popping. It is long past time to ask ourselves why we are stuck with this lemon.

Government Regulation as Wish-Fulfillment

For millennia, children have cultivated dream fantasies of magical figures that make their wishes come true. These apparently satisfy a deep-seated longing for security and fulfillment. Freud referred to this need as "wish fulfillment." Although Freudian psychology was discredited long ago, the term retains its usefulness.

When we grow into adulthood, we do not shed our childish longings; they merely change form. In the 20th century, motion pictures became the dominant art form in the Western world because they served as fairy tales for adults by providing alternative versions of reality that were preferable to daily life.

When asked by pollsters to list or confirm the functions regulation should perform, citizens repeatedly compose “wish lists” that are either platitudes or, alternatively, duplicate the functions actually approximated by competitive markets. It seems even more significant that researchers and policymakers do exactly the same thing. Returning to Lewis Grossman’s evaluation of the public’s view of FDA: “Americans’ distrust of major institutions has led them to the following position: On the one hand, they believe the FDA has an important role to play in ensuring the basic safety of products and the accuracy and completeness of labeling and advertising. On the other hand, they generally do not want the FDA to inhibit the transmission of truthful information from manufacturers to consumers, and – except in cases in which risk very clearly outweighs benefit – they prefer that the government allow consumers to make their own decisions regarding what to put in their own bodies.”

This is a masterpiece of self-contradiction. Just exactly what is an "important role to play," anyway? Allowing an agency that previously denied the right to label and advertise to play any role is playing with fire; it means that genuine consumer advocates have to fight a constant battle with the government to hold onto the territory they have won. If consumers really don't want the FDA to "inhibit the transmission of truthful information from manufacturers to consumers," they should abolish the FDA, because free markets do the job consumers want done by definition and the laws already prohibit fraud and deception.

The real whopper in Grossman’s summary is the caveat about risk and benefit. Government agencies in general and the FDA in particular have traditionally shunned cost/benefit and risk/benefit analysis like the plague; when they have attempted it they have done it badly. Just exactly who is going to decide when risk “very clearly” outweighs benefit in a regulatory context, then? Grossman, a professional policy analyst who should know better, is treating the FDA exactly as the general public does. He is assuming that a government agency is a wish-fulfillment entity that will do exactly what he wants done – or, in this case, what he claims the public wants done – rather than what it actually does.

Every member of the general public would scornfully deny that he or she believes in a man called Santa Claus who lives at the North Pole and flies around the world on Christmas Eve distributing presents to children. But for an apparent majority of the public, government in general and regulation in particular plays a similar role because people ascribe quasi-magical powers to them to fulfill psychological needs. For these people, it might be more apropos to view government as “Mommy” or “Daddy” because of the strength and dependent nature of the relationship.

Can Government Control Consumer Risk? The Emerging Scientific Answer: No 

The comments of Grossman, assorted researchers and countless other commentators and onlookers over the years imply that government regulation is supposed to act as a sort of stern but benevolent parent, protecting us from our worst impulses by regulating the risks we take. This is reflected not only in cigarette taxes but also in the draconian warnings on cigarette packages and in numerous other measures taken by regulators. Mandatory seat belt laws, adopted by legislatures in 49 states since the mid-1980s at the urging of the federal government, promised the near-elimination of automobile fatalities. Government bureaucracies like the Occupational Safety and Health Administration have covered the workplace with a raft of safety regulations. The Consumer Product Safety Commission presides with an eagle eye over the safety of the products that fill our market baskets.

In 1975, University of Chicago economist Sam Peltzman published a landmark study in the Journal of Political Economy. In it, Peltzman revealed that the various devices and measures mandated by government and introduced by the big auto companies in the 1960s had not actually produced statistically significant improvements in safety, as measured by auto fatalities and injuries. In particular, use of the new three-point seat belts seemed to show a slight improvement in driver fatalities that was more than offset by a rise in fatalities to others – pedestrians, cyclists and possibly occupants of victim vehicles. Over the years, subsequent research confirmed Peltzman's results so repeatedly that former Chairman of the Council of Economic Advisers N. Gregory Mankiw dubbed this the "Peltzman Effect."

A similar kind of result emerged throughout the social sciences. Innovations in safety continually failed to produce the kind of safety results that experts anticipated and predicted, often failing to provide any improved safety performance at all. It seems that people respond to improved safety by taking more risk, thwarting the expectations of the experts. Needless to say, this same logic applies also to rules passed by government to force people to behave more safely. People simply thwart the rules by finding ways to take risk outside the rules. When forced to wear seat belts, for example, they drive less carefully. Instead of endangering only themselves by going beltless, now they endanger others, too.

Today, this principle is well-established in scientific circles. It is called risk compensation. The idea that people strive to maintain, or "purchase," a particular level of risk and hold it constant in the face of outside efforts to change it is called risk homeostasis.

These concepts make the entire project of government regulation of consumer risk absurd and counterproductive. Previously it was merely wrong in principle, an abuse of human freedom. Now it is also wrong in practice because it cannot possibly work.

Dropping the Façade: the Reality of Government Regulation

If the results of government regulation do not comport with its stated purposes, what are its actual purposes? Are the politicians, bureaucrats and employees who comprise the legislative and executive branches and the regulatory establishment really unconscious of the effects of regulation? No, for the most part the beneficiaries of regulation are all too cynically aware of the façade that covers it.

Politicians support regulation to court votes from the government-dependent segment of the voting public and to avoid being pilloried as killers and haters or – worst of all – a “tool of the big corporations.” Bureaucrats tacitly do the bidding of politicians in their role as administrators. In return, politicians do the bidding of bureaucrats by increasing their budgets and staffs. Employees vote for politicians who support regulation; in return, politicians vote to increase budgets. Employees follow the orders of bureaucrats; in return, bureaucrats hire bigger staffs that earn them bigger salaries.

This self-reinforcing and self-supporting network constitutes the metastatic cancer of big government. The purpose of regulation is not to benefit the public. It is to milk the public for the benefit of politicians, bureaucrats and government employees. Regulation drains resources away from and hamstrings the productive private economy.

Even now, as we speak, this process – aided, abetted and drastically accelerated by rapid money creation – is bringing down the economies of the Western world around our ears by simultaneously wreaking havoc on the monetary order with easy money, burdening the financial sector with debt and eviscerating the real economy with regulations that steadily erode its productive potential.

DRI-161 for week of 11-30-14: The Enemy Within: The Move to Strangle Welfare-State Reform In Its Crib

An Access Advertising EconBrief: 

The Enemy Within: The Move to Strangle Welfare-State Reform In Its Crib

The resurgence of the Republican Party after the overwhelming victory of Barack Obama and the Democrats in the 2008 elections was led by the Tea Party. This grassroots political movement began as a popular uprising and only gradually acquired formal organizational trappings. As yet, its ideological roots are so thin and shallow that they provide no support for the movement.

This contrasts sharply with the conservative movement, in which the order of development was reversed. Ideology came first, with roots implanted firmly by opposition to the New Deal and to an interventionist foreign policy, a cause led by Sen. Robert Taft. The intellectual foundation laid by William F. Buckley, Jr. in National Review magazine educated a generation of young Republicans and paved the way for the candidacy of Barry Goldwater in 1964. Goldwater's landslide defeat nevertheless introduced Ronald Reagan to national politics. By the time Reagan was elected President in 1980, conservatism had become the dominant political paradigm.

Nowhere is a vacuum more abhorrent than in political ideology. Today’s victorious Republicans may purport to search for a mode of governance, but what they are really doing is belatedly deciding what they stand for. (The hapless domestic and foreign policies of the Obama administration gave them the luxury of winning the elections merely by signaling their lack of congruence with President Obama et al.) They enjoy a surfeit of advice from all quarters.

Nowhere is this advice more pointed than in its economic dimension.

 

Should Republicans “Take ‘Yes’ For an Answer?”

 

Although Buckley died in 2008, National Review still retains some of the intellectual momentum he generated. Its "Roving Correspondent," Kevin Williamson, devoted a recent essay to an advisory for the Republican Party on post-victory strategy. Williamson sees the solid victory in the 2014 mid-term elections as "a chance to meet voters where they are." To do that, Republicans need to "take 'yes' for an answer."

Exactly how should we interpret these glib formulations? Williamson insists that Republicans should not treat electoral good fortune as the opportunity to create change. Instead, the Party should reverse the normal order of precedence and cater to popular disposition – “meet the voters where they are” instead of persuading the voters of the desirability or necessity of change. Don’t continue the campaign, Williamson pleads. The votes have already been counted; just “take ‘yes’ for an answer” and get on with the business of crafting a governing compromise that everybody can live with.

So much for the revolutionary stance of the Tea Party; the EPA won't have to test Boston Harbor for caffeine contamination.

The reader’s instinctive reaction to Williamson’s essay is to flip the magazine over and re-check the cover. Can this really be National Review, legendary incubator of conservative thought, renowned for taking no prisoners in the ideological wars? We have just suffered six years under the lash of a Democrat regime whose marching order was “elections have consequences.” Now the flagship of American conservatism is preaching a gospel of preemptive surrender?

Williamson's mood is apparently the product of disillusionment. The birth of NR, he reminds his readers, was a reaction to Eisenhower Republicanism. Instead of rolling back the welfare state installed by Roosevelt and Truman, Ike accepted it – thereby setting the tone for Republican policy thereafter. The magazine fulminated, but to no avail. Goldwaterism produced Reagan… "a self-described New Deal Democrat," pouts Williamson, "who famously proclaimed that he hadn't left the Democratic Party but the party had left him."

Reagan revisionism is part of a new NR realpolitik, it seems. “At the end of the Reagan years, the Soviet Union was dead on its feet, the United States was a resurgent force in the world… and spending and deficits both were up, thanks to the White House’s inability or unwillingness to put a leash on Tip O’Neill and congressional Democrats. The public sector was larger and more arrogant, there were more rather than fewer bureaucrats and bureaucracies, and nobody had made so much as a head fake in the direction of reforming such New Deal legacies as Social Security or even Great Society boondoggles such as Medicaid.”

The author’s psychological defeatism apparently so overwhelmed him that he lost touch with reality. The Soviet Union is “dead on its feet” but the singular responsibility of President Ronald Reagan for this fact is unmentioned. (One cannot help wondering whether this is an oversight or a deliberate omission.) But Reagan is held liable for the actions of the Democrat Speaker of the House and Congressional Democrats! Has anybody blamed Barack Obama for not “putting a leash on House Republicans” to achieve more of his agenda? Has Williamson published his Canine Theory of Congressional Fiscal Restraint in a peer-reviewed journal of political science?

One might have thought that winning the Cold War, taming hyperinflation and reviving moribund economic growth (also left unmentioned by Williamson) constituted sufficient labor unto a Presidential tenure. Various authors, ranging from Paul Craig Roberts to David Stockman, have chronicled the internecine warfare attending the Reagan administration’s efforts to cut the federal budget. Apparently Williamson has forgotten, if he ever knew, that Reagan enjoyed the reputation of a ferocious budget-cutter while in office. This dovetailed with his famous declaration that “government isn’t the solution – it’s the problem.” If, three decades after the fact, Reagan’s efforts seem puny, this may be because we hold him responsible for failing to effect a counterrevolution to match the permanency of FDR’s New Deal. One would think, though, that the only President since FDR to actually reduce the size of the Federal Register deserved better at Williamson’s hands.

Obviously, Williamson paints a false portrait of the Reagan years to justify the counsel of despair he gives today. “We did not undo the New Deal in the 1980s. We are not going to undo the New Deal before 2017 either… the fact remains that the American people are not as conservative as conservatives would like them to be, nor are they always conservative in the way conservatives would like them to be.” It seems that there is a “disconnect between the numbers of Americans who describe themselves as ‘conservative’ or ‘liberal’ and the policy preferences those Americans express.” Americans think of themselves as conservative but favor liberal policies. So, Williamson concludes, the only sensible thing to do is humor them.

“Americans …are, by and large, conservative in the same sense that Ronald Reagan was, not in the sense that Robert Taft was, or… Barry Goldwater was. They intuit that the federal government is overly large and intrusive, they resent the slackers and idlers who exploit that situation, and they worry that our long-term finances are upside down, but they do not wish to repeal the New Deal.”

“Example: A majority of voters believe that something must be done to rectify Social Security’s finances, and a plurality of voters believe that a combination of benefit cuts and tax increases should be adopted to achieve that… [but] strong majorities … of 56 percent… oppose Social Security benefit cuts and Social Security tax increases, according to Gallup. No doubt many of these voters think of themselves as conservatives… it is likely that the great majority of self-described conservatives would support continuing current Social Security policies indefinitely – if they believed it fiscally possible. The current Left-Right divide on Social Security is not a question of what we ought to do, but of what we can do.” Williamson cites Robert Taft’s eventual concession on Social Security as an example of the Right bending its principles to his form of pragmatism. After all, “populist measures are, to the surprise of nobody except scholars of political science, popular, hence the support among a majority of registered Republicans for raising the minimum wage.”

Instead of fighting among themselves on principle, Williamson contends, Republicans should be scanning the polls to find out where their base stands – and adjusting their stance accordingly. They should be meeting the voters where the voters are rather than persuading voters to see the light of sweet reason. They should take “yes” for an answer when they hear it from the networks on election night.

Rebutting Williamson’s “Populism”

 

No full-blooded Tea Party member will swallow Kevin Williamson’s argument, despite the author’s insistence that he is really enunciating their position. They didn’t overcome the twin obstacles of the Democrat Party and the Republican establishment only to be lectured on their extremism in the pages of National Review, for crying out loud. But we must go beyond visceral rejection of Williamson’s moral and psychological defeatism. Straightforward analysis indicts it.

Since the venue is National Review, it is fitting to recall Bill Buckley's distinction between politics and economics: "The politician says: 'What do you want?' The economist says: 'What do you want the most?'" For many decades, voters have been offered big government as if it were a consumer product with zero price. That is the context in which to contemplate the poll responses that Williamson treats as commandments graven in stone. In the beginning, there was the word. And conservatives believed the word. But when the world around them changed and God neither smote the unbeliever nor struck down the evil Antichrist, conservatives eventually shrugged and went with the flow. After a while they began singing the same hymns to Baal as the liberals. They couldn't very well go to jail for non-participation in the Social Security system, and they discovered that the government checks always cashed – so why not go along? It was the only way they could get their money out.

In due course, conservatives found out along with the rest of society that they had been lied to and flimflammed by the pay-as-you-go status of Social Security. It was not a system of insurance, after all; the word “social” in the terms “social insurance” and “Social Security” should be taken to mean “not,” just as it does in terms like “social justice,” “social democracy” and “social responsibility.” By then, though, everybody was so thoroughly habituated to the system that it would have required something close to a revolution to change it. Something like what the colonists originally did when they revolted against the British and dumped tea in Boston harbor, for example.

When Williamson implies that conservatives are entirely comfortable with Social Security today, he is being disingenuous. (That either means “lacking in candor” or “naïve;” he is either lying to us or he is plain stupid.) In fact, conservatives (and just about everybody else) below the age of 50 no longer expect even to receive Social Security benefits – they expect the system to go bankrupt long before they collect. They are not comfortable with the system but resigned to it; there is a world of difference between the two. And considering that Williamson himself just published an article on “Generation Vexed” and its growing dissatisfaction with the Obama regime in the previous issue of NR, he cannot claim indifference to their electoral attitudes in this context.

But this attitude of resignation is wildly optimistic compared to the fiscal reality facing America and the rest of Western industrial society today. The welfare state is collapsing around our ears. Central bankers are in extremis; they are reduced to printing money to finance operations. The Eurozone staggers from crisis to crisis. Japan is now working on its third “lost decade.” Demography is a disaster; birth rates will not bail us out. Worse – they are falling like leaden raindrops, reducing the number of workers paying in per welfare-benefit recipient. The crisis is not in the far-off future but today – if the U.S. had to finance upcoming deficits at normal rates of interest rather than the “zero interest rates” of the last five years, the interest charges alone would eat up most of the federal budget. And the entitlement programs that Williamson views as sacred are now eating up most of that budget.

Williamson acts as if Social Security finance were a Starbucks menu. He treats longstanding conservative doctrine on Social Security as if it were excerpted from fundamentalist Scripture out of Inherit the Wind. But he is no Henry Drummond; Social Security is exactly the Ponzi scheme that conservatives have always fulminated against. In fact, it is worse, because the Day of Judgment is arriving even sooner than prophesied.

True, it isn’t just Social Security – it’s also Medicare and Medicaid and the welfare system. (Welfare reform didn’t come close to reforming the whole system, just one of the six components of it.) The point is that we have passed the elective stage and have now entered the stage of imminent collapse. In that stage, monetary chaos and an uncertain fate for democracy await.

And what is Williamson’s reaction? When Americans protest, “I can be overdrawn; I still have checks,” Williamson nods, “Right you are.” But we’re not just overdrawn – we’re completely bankrupt.

Under these conditions, what are our choices? Suppose we remain in Obamaville. That will result in collapse. Suppose we go Williamson's route, picking and choosing a few pieces of low-hanging reform fruit. That will also result in collapse.

We have nothing to lose and everything to gain by telling voters the truth and opting for revolutionary reform. If they reject us, we will be hanged for offering a full-bodied sheep – limited government, free markets and freedom – rather than a bleating lamb of meekly pandering populism.

Popunomics

 

Williamson isn't just selectively bad on economics – he has renounced economic logic entirely in favor of populist emotion. Take the minimum wage – Williamson's shining example of popular Populism. The minimum wage is one of the three or four most heavily researched measures in economics, having attracted empirical studies consistently since the late 1940s. Until the notorious Card-Krueger study in 1993, these found that the minimum wage adversely affected employment of low-skilled labor. These findings jibed with a priori theory, which predicted that a minimum wage would produce a surplus of labor (unemployment), increase the scope for discrimination by buyers of labor against sellers of labor, reduce the quality of labor and/or jobs, encourage businesses to offer fewer benefits and more part-time jobs, and encourage businesses to substitute machinery and high-skilled labor for low-skilled labor. All these effects have been observed in conjunction with the minimum wage since its imposition. Card and Krueger offered no rebuttal to the eloquent testimony of the research record and were notably silent on the theory underpinning their own research result, which purported to find an increase in comparative employment in one state after an increase in the minimum wage. Both the validity of their data and the econometric soundness of their results were later challenged.

Having carefully chosen one of the most economically untenable of all Populist positions on which to “meet voters where they are,” Williamson next ups the ante. From the debased coin of the minimum wage, he turns to the fool’s gold of restrictionist anti-immigrationism. The late Richard Nadler painstakingly showed – and in NR to boot, in 2009’s “Great Immigration Shoot-Out” – that restrictionists were big and consistent electoral losers in Republican primaries and general elections. But Williamson is back at the same old stand, hawking “stronger border controls… mandatory use of E-verify… and like measures” because “voters are solidly on the conservatives’ side on this issue.”

Oh, really? What perfect timing – net immigration has been roughly zero for the last few years. Market forces, not government quotas, control international migration; the quotas merely serve to criminalize violators. Immigration benefits America on net balance, regardless of its legal dimension. Along with free trade and opposition to the minimum wage, support for free international migration ranks among the issues on which economists most strongly agree.

Wait a minute – Williamson has gone from supporting brain-dead economics because it is generally popular (the minimum wage) to supporting it because it is popular with NR’s constituency. Just as Buckley had to rescue the Right from the anti-Semitism of the American Mercury and the conspiratorial John Birch Society, we are now faced with the task of rehabilitating the right wing from the crank nativism and restrictionism that has asserted squatter’s rights at National Review. Calling Williamson’s version of expedience Populism gives ideology a bad name. The 19th-century Populism of Pitchfork Ben Tillman et al. featured cheap money and fashionably bad economics, but it was at least more consistent than Williamson’s proposal.

Borrowing the argot of the digital generation, Williamson is expounding not Populism but rather PLR – the “path of least resistance.” Put your finger to the wind and sense what we can get the voters to sign off on. See how many fundamental principles and how much government money we’ll have to sacrifice to win the next election. Williamson purports to be lecturing us on why Republicans fail – because they are too ideologically scrupulous, insisting on free markets, free trade, open borders, flexible prices, deregulation. But the encroachment of big government and the welfare state proceeded mostly unabated throughout the 20th century despite periods of Republican ascendancy. How could this have happened? Because Republicans were really heeding Williamson’s doctrine all along; PLR ruled, not ideological constancy. Goldwater never led the Republican Party, even when he won the nomination. Reagan was detested by the Party establishment and his philosophy was ditched the minute Air Force One lifted off the runway to return him to California. PLR was always the de facto rule of thumb – and forefinger, ring finger and all other digits. How else could a Party ostensibly supporting limited government have countenanced the transition to unlimited government?

Williamson treats the rise of the Tea Party as America’s version of China’s Cultural Revolution. Whew! We must cease all this senseless bloodletting and wild-eyed revolutionary fervor; return to our senses and settle for what we can get rather than striving for Utopia. Back to normalcy, back to pragmatism and compromise and half-a-loaf … well, maybe a quarter-loaf… or even a slice… hell, maybe even a few crumbs, just so it’s bread.

It is fitting that Keynesian economics has come home to roost in this time of Quantitative Easing and central-banking hegemony and liquidity everywhere with not a loan to drink. “In the long run, we are all dead” was Keynes’ most famous quip. Well, we can’t live in the short run forever. The procession of short runs eventually produces a long run. And the long run is here.

It’s time to pay up. The voters have given Republicans a gift – the chance to tell the truth and turn the ship around before we reach the falls. PLR is no longer sufficient. It’s time – no, it’s long past time to start doing all the things that Williamson says Republicans can’t do and mustn’t do.

The Anti-Economics Party or the Party of Sound Economics?

“The American public is in many ways conservative, but in many ways it is not, and its conservatism often is not the conservatism of Milton Friedman or Phil Gramm but that of somebody who fears the national debt and dreads bureaucracy but rather likes his Social Security check.” The Republican Party’s glory days of the post-World War II period came during the Great Moderation ushered in by the Reagan Presidency, beginning with Reagan’s election in late 1980 and continuing into the present millennium. That success, along with victory in the Cold War, was the only real departure from PLR. The prosperity of the period was driven by an economic policy whose positive features were disinflation, sound money and low taxes – and the low inflation they produced. This is a combination that Keynesian economics finds contradictory and now repudiates utterly. Williamson repudiates it, too; hence his explicit rejection of Milton Friedman and Phil Gramm as exponents of conservatism. (Once again, his use of Friedman, a libertarian rather than a conservative, is disingenuous.) He is still living in the past, the days when we could have our conservatism and our Social Security checks, too. Sorry, we have bigger problems now than how to buy votes from our own voter base to win the next election.

For years, Republicans have been able to win occasional elections the easy way, by adopting PLR. Those days are over. From now on, the Republicans will have to earn their money as a party of limited government by actually practicing the principles they profess. That is the bad news. But the good news is that they cannot lose by doing this. The very economics that Kevin Williamson looks down on tells us that.

Economics defines “cost” as the value of the alternative foregone. If telling the truth will cause you to lose the election, you may well decide to lie; the cost of truth-telling seems too high. But if winning and losing the election are reduced to equivalence by the consequences of economic collapse, then telling the truth is suddenly no longer costly. Avoiding collapse becomes the only matter of consequence, and the election outcome fades into insignificance.
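
A toy calculation makes the opportunity-cost point explicit. The payoff numbers below are invented solely for illustration; the only feature doing any work is that collapse drags winning and losing the election down to the same value.

    # Opportunity cost = the value of the best alternative foregone (illustrative numbers only)
    def cost_of_truth_telling(payoff_if_we_lie_and_win, payoff_if_we_tell_truth):
        return payoff_if_we_lie_and_win - payoff_if_we_tell_truth

    # Normal times: lying wins the election, truth loses it
    print(cost_of_truth_telling(10, 0))        # 10 -> truth-telling looks prohibitively expensive

    # Collapse scenario: both paths end in the same ruin
    print(cost_of_truth_telling(-100, -100))   # 0 -> truth-telling no longer costs anything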

Ironically, that is not only sound economics; it is also supremely pragmatic.