DRI-172 for week of 7-5-15: How and Why Did ObamaCare Become SCOTUSCare?

An Access Advertising EconBrief:

How and Why Did ObamaCare Become SCOTUSCare?

On June 25, 2015, the Supreme Court of the United States delivered its most consequential opinion in recent years in King v. Burwell. King was David King, one of several plaintiffs opposing Sylvia Burwell, Secretary of Health and Human Services. The case might more colloquially be called “ObamaCare II,” since it dealt with the second major attempt to overturn the Obama administration’s signature legislative achievement.

The Obama administration has been bragging about its success in attracting signups for the program. Not surprisingly, it fails to mention two facts that make this apparent victory Pyrrhic. First, most of the signups are people who lost their previous health insurance due to the law’s provisions, not people who lacked insurance to begin with. Second, a large chunk of enrollees are being subsidized by the federal government in the form of a tax credit toward the cost of the insurance.

The point at issue in King v. Burwell is the legality of this subsidy. The original legislation provides for health-care exchanges established by state governments, and proponents have been quick to cite these provisions to pooh-pooh the contention that the Patient Protection and Affordable Care Act (PPACA) ushered in a federally run, socialist system of health care. The specific language used by PPACA in Section 1401 is that the IRS can provide tax credits for insurance purchased on an “exchange established by the State.” That phrase appears 14 times in Section 1401, and each time it clearly refers to state governments, not the federal government. But in actual practice, states have found it excruciatingly difficult to establish these exchanges, and many states have refused to do so. Thus, people in those states have turned to the federal government’s website for health insurance and have nevertheless received a tax credit under the IRS’s interpretation of Section 1401. That interpretation has come to light in various lawsuits heard by lower courts, some of which have ruled for plaintiffs and against attempts by the IRS and the Obama administration to award the tax credits.

Without the tax credits, many people on both sides of the political spectrum agree, PPACA will crash and burn. Not enough healthy people will sign up for the insurance to subsidize those with pre-existing medical conditions for whom PPACA is the only source of external funding for medical treatment.

To a figurative roll of drums, the Supreme Court of the United States (SCOTUS) released its opinion on June 25, 2015. It upheld the legality of the IRS interpretation in a 6-3 decision, finding for the government and the Obama administration for the second time. And for the second time, the opinion for the majority was written by Chief Justice John Roberts.

Roberts’ Rules of Constitutional Disorder

Given that Chief Justice Roberts had previously written the opinion upholding the constitutionality of the law, his vote here cannot be considered a complete shock. As before, the shock was in the reasoning he used to reach his conclusion. In the first case (National Federation of Independent Business v. Sebelius, 2012), Roberts interpreted a key provision of the law in a way that its supporters had categorically and angrily rejected both during the legislative debate prior to enactment and afterward. He referred to the “individual mandate” that uninsured citizens must purchase health insurance as a tax. This rescued it from the otherwise untenable status of a coercive consumer directive – something not allowed under the Constitution.

Now Chief Justice Roberts addressed the meaning of the phrase “established by the State.” He did not agree with one interpretation previously advanced by the government’s Solicitor General – that the term was an undefined term of art. He disdained to apply a precedent established by the Court in a previous case involving the interpretation of law by administrative agencies, the Chevron case. That precedent holds that where a statutory phrase is ambiguous, a reasonable interpretation by the agency charged with administering the law governs. In this case, though, Roberts claimed that since “the IRS…has no expertise in crafting health-insurance policy of this sort,” Congress could not possibly have intended to grant the agency this kind of discretion.

Roberts was prepared to concede that “established by the State” does not naturally mean “established by the federal government.” But he argued that the Supreme Court could not interpret the law this way, because doing so would cause the law to fail to achieve its intended purpose. So, the Court must treat the wording as ambiguous and interpret it in such a way as to advance the goals intended by Congress and the administration. Hence his decision for defendant and against plaintiffs.

In other words, he rejected the ability of the IRS to interpret the meaning of the phrase “established by the State” because of that agency’s lack of health-care-policy expertise, but is sufficiently confident of his own expertise in that area to interpret its meaning himself; it is his assessment of the market consequences that drives his decision to uphold the tax credits.

Roberts’ opinion prompted one of the most scathing, incredulous dissents in the history of the Court, by Justice Antonin Scalia. “This case requires us to decide whether someone who buys insurance on an exchange established by the Secretary gets tax credits,” begins Scalia. “You would think the answer would be obvious – so obvious that there would hardly be a need for the Supreme Court to hear a case about it… Under all the usual rules of interpretation… the government should lose this case. But normal rules of interpretation seem always to yield to the overriding principle of the present Court – the Affordable Care Act must be saved.”

The reader can sense Scalia’s mounting indignation and disbelief. “The Court interprets [Section 1401] to award tax credits on both federal and state exchanges. It accepts that the most natural sense of the phrase ‘an exchange established by the State’ is an exchange established by a state. (Understatement, thy name is an opinion on the Affordable Care Act!) Yet the opinion continues, with no semblance of shame, that ‘it is also possible that the phrase refers to all exchanges.’ (Impossible possibility, thy name is an opinion on the Affordable Care Act!)”

“Perhaps sensing the dismal failure of its efforts to show that ‘established by the State’ means ‘established by the State and the federal government,’ the Court tries to palm off the pertinent statutory phrase as ‘inartful drafting.’ The Court, however, has no free-floating power to rescue Congress from its drafting errors.” In other words, Chief Justice Roberts has rewritten the law to suit himself.

Scalia concludes: “…the Court forgets that ours is a government of laws and not of men. That means we are governed by the terms of our laws and not by the unenacted will of our lawmakers. If Congress enacted into law something different from what it intended, then it should amend the law to conform to its intent. In the meantime, the Court has no roving license …to disregard clear language on the view that … ‘Congress must have intended’ something broader.”

“Rather than rewriting the law under the pretense of interpreting it, the Court should have left it to Congress to decide what to do… [the] Court’s two cases on the law will be remembered through the years. And the cases will publish the discouraging truth that the Supreme Court favors some laws over others and is prepared to do whatever it takes to uphold and assist its favorites… We should start calling this law SCOTUSCare.”

Jonathan Adler of the much-respected and quoted law blog Volokh Conspiracy put it this way: “The umpire has decided that it’s okay to pinch-hit to ensure that the right team wins.”

And indeed, what most stands out about Roberts’ opinion is its contravention of ordinary constitutional thought. It is not the product of a mind that began at square one and worked its way methodically to a logical conclusion. The reader senses a reversal of procedure; the Chief Justice started out with a desired conclusion and worked backwards to figure out how to justify reaching it. Justice Scalia says as much in his dissent. But Scalia does not tell us why Roberts is behaving in this manner.

If we are honest with ourselves, we must admit that we do not know why Roberts is saying what he is saying. Beyond question, it is arbitrary and indefensible. Certainly it is inconsistent with his past decisions. There are various reasons why a man might do this.

One obvious motivation might be that Roberts is being blackmailed by political supporters of the PPACA, within or outside of the Obama administration. Since blackmail is not only a crime but also a distasteful allegation to make, nobody will advance it without concrete supporting evidence – not only evidence against the blackmailer but also an indication of his or her ammunition. The opposite side of the blackmail coin is bribery. Once again, nobody will allege this publicly without concrete evidence, such as letters, tapes, e-mails, bank account or bank-transfer information. These possibilities deserve mention because they lie at the head of a short list of motives for betrayal of deeply held principles.

Since nobody has come forward with evidence of malfeasance – or is likely to – suppose we disregard that category of possibility. What else could explain Roberts’ actions? (Note the plural; this is the second time he has sustained PPACA at the cost of his own integrity.)

Lord Acton Revisited

To explain John Roberts’ actions, we must develop a model of political economy. That requires a short side trip into the realm of political philosophy.

Lord Acton’s famous maxim is: “Power tends to corrupt, and absolute power corrupts absolutely.” We are used to thinking of it in the context of a dictatorship or of an individual or institution temporarily or unjustly wielding power. But it is highly applicable within the context of today’s welfare-state democracies.

All of the Western industrialized nations have evolved into what F. A. Hayek called “absolute democracies.” They are democratic because popular vote determines the composition of representative governments. But they are absolute in scope and degree because the administrative agencies staffing those governments are answerable to no voter. And increasingly the executive, legislative and judicial branches of the governments wield powers that are virtually unlimited. In practical effect, voters vote on which party will wield nominal executive control over the agencies and dominate the legislature. Instead of a single dictator, voters elect a government body with revolving and rotating dictatorial powers.

As the power of government has grown, the power at stake in elections has grown commensurately. This explains the burgeoning amounts of money spent on elections. It also explains the growing rancor between opposing parties, since ordinary citizens perceive the loss of electoral dominance to be subjugation akin to living under a dictatorship. But instead of viewing this phenomenon from the perspective of John Q. Public, view it from within the brain of a policymaker or decisionmaker.

For example, suppose you are a completely fictional Chairman of a completely hypothetical Federal Reserve Board. We will call you “Bernanke.” During a long period of absurdly low interest rates, a huge speculative boom has produced unprecedented levels of real-estate investment by banks and near-banks. After stoutly insisting for years on the benign nature of this activity, you suddenly perceive the likelihood that this speculative boom will go bust and some indeterminate number of these financial institutions will become insolvent. What do you do? 

Actually, the question is really more “What do you say?” The actions of the Federal Reserve in regulating banks, including those threatened with or undergoing insolvency, are theoretically set down on paper, not conjured up extemporaneously by the Fed Chairman every time a crisis looms. These days, though, the duties of a Fed Chairman involve verbal reassurance and massage as much as policy implementation. Placing those duties in their proper light requires that our side trip be interrupted with a historical flashback.

Let us cast our minds back to 1929 and the onset of the Great Depression in the United States. At that time, virtually nobody foresaw the coming of the Depression – nobody in authority, that is. For many decades afterwards, the conventional narrative was that President Herbert Hoover adopted a laissez faire economic policy, stubbornly waiting for the economy to recover rather than quickly ramping up government spending in response to the collapse of the private sector. Hoover’s name became synonymous with government passivity in the face of adversity. Makeshift shanties and villages of the homeless and dispossessed became known as “Hoovervilles.”

It took many years to dispel this myth. The first truthteller was economist Murray Rothbard, whose 1962 book America’s Great Depression pointed out that Hoover had spent his entire term in a frenzy of activism. Far from remaining a pillar of fiscal rectitude, Hoover had presided over federal deficit spending so large that his successor, Democrat Franklin Delano Roosevelt, campaigned on a platform of balancing the federal-government budget. Hoover sternly warned corporate executives not to lower wages and adopted an official stance in favor of inflation.

Professional economists ignored Rothbard’s book in droves, as did reviewers throughout the mass media. Apparently the fact that Hoover’s policies failed to achieve their intended effects persuaded everybody that he couldn’t have actually followed the policies he did – since his actual policies were the very policies recommended by mainstream economists to counteract the effects of recession and Depression and were largely indistinguishable in kind, if not in degree, from those followed later by Roosevelt.

The anathematization of Herbert Hoover drove Hoover himself to distraction. The former President lived another thirty years, to age ninety, stoutly maintaining his innocence of the crime of insensitivity to the misery of the poor and unemployed. Prior to his presidency, Hoover had built a reputation as one of the great humanitarians of the 20th century by deploying his engineering and organizational skills in the cause of disaster relief across the globe. The trashing of his reputation as President is one of history’s towering ironies. As it happened, his economic policies were disastrous, but not because he didn’t care about the people. His failure was ignorance of economics – the same sin committed by his critics.

Worse than the effects of his policies, though, was the effect his demonization has had on subsequent policymakers. We do not remember the name of the captain of the Californian, the ship that lay anchored within sight of the Titanic but failed to answer distress calls and go to the rescue. But the name of Hoover is still synonymous with inaction and defeat. In politics, the unforgivable sin became failing to act in the face of a crisis, regardless of the consequences.

Today, unlike in Hoover’s day, the Chairman of the Federal Reserve Board is the quarterback of economic policy. This is so despite the Fed’s ambiguous status as a quasi-government body, owned by its member banks with a leader appointed by the President. Returning to our hypothetical, we ponder the dilemma faced by the Chairman, “Bernanke.”

Bernanke only directly controls monetary policy and bank regulation. But he receives information about every aspect of the U.S. economy in order to formulate Fed policy. The Fed also issues forecasts and recommendations for fiscal and regulatory policies. Even though the Federal Reserve is nominally independent of politics and of the Treasury Department of the federal government, the Fed’s policies affect and are affected by government policies.

It might be tempting to assume that Fed Chairmen know what is going to happen in the economic future. But there is no reason to believe that is true. All we need do is examine their past statements to disabuse ourselves of that notion. Perhaps the popping of the speculative bubble that Bernanke now anticipates will produce an economic recession. Perhaps it will even topple the U.S. banking system like a row of dominoes and produce another Great Depression, à la 1929. But we cannot assume that either. The fact that we had one (1) Great Depression is no guarantee that we will have another one. After all, we have had 36 other recessions that did not turn into Great Depressions. There is nothing like a general consensus on what caused the Depression of the 1930s. (The reader is invited to peruse the many volumes written by historians, economic and non-, on the subject.) About the only point of agreement among commentators is that a large number of things went wrong more or less simultaneously and all of them contributed in varying degrees to the magnitude of the Depression.

Of course, a good case might be made that it doesn’t matter whether Fed Chairman can foresee a coming Great Depression or not. Until recently, one of the few things that united contemporary commentators was their conviction that another Great Depression was impossible. The safeguards put in place in response to the first one had foreclosed that possibility. First, “automatic stabilizers” would cause government spending to rise in response to any downturn in private-sector spending, thereby heading off any cumulative downward movement in investment and consumption in response to failures in the banking sector. Second, the Federal Reserve could and would act quickly in response to bank failures to prevent the resulting reverse-multiplier effect on the money supply, thereby heading off that threat at the pass. Third, bank regulations were modified and tightened to prevent failures from occurring or restrict them to isolated cases.

Yet despite everything written above, we can predict confidently that our fictional “Bernanke” would respond to a hypothetical crisis exactly as the real Ben Bernanke did respond to the crisis he faced and later described in the book he wrote about it. The actual and predicted responses are the same: Scare the daylights out of the public by predicting an imminent Depression of cataclysmic proportions and calling for massive government spending and regulation to counteract it. Of course, the real-life Bernanke claimed that he and Treasury Secretary Henry Paulson correctly foresaw the economic future and were heroically calling for preventive measures before it was too late. But the logic we have carefully developed suggests otherwise.

Nobody – not Federal Reserve Chairmen or Treasury Secretaries or California psychics – can foresee Great Depressions. Predicting a recession is only possible if the cyclical process underlying it is correctly understood, and there is no generally accepted theory of the business cycle. No, Bernanke and Paulson were not protecting America with their warning; they were protecting themselves. They didn’t know that a Great Depression was in the works – but they did know that they would be blamed for anything bad that did happen to the economy. Their only way of insuring against that outcome – of buying insurance against the loss of their jobs, their professional reputations and the possibility of historical “Hooverization” – was to scream for the biggest possible government action as soon as possible.

Ben Bernanke had been blasé about the effects of ultra-low interest rates; he had pooh-poohed the possibility that the housing boom was a bubble that would burst with reverberations that would flatten the economy. Suddenly he was confronted with a possibility that threatened to make him look like a fool. Was he icy cool, detached, above all personal considerations? Thinking only about banking regulations, national-income multipliers and the money supply? Or was he thinking the same thought that would occur to any normal human being in his place: “Oh, my God, my name will go down in history as the Herbert Hoover of Fed chairmen”?

Since the reasoning he claims as his inspiration is so obviously bogus, it is logical to classify his motives as personal rather than professional. He was protecting himself, not saving the country. And that brings us to the case of Chief Justice John Roberts.

Chief Justice John Roberts: Selfless, Self-Interested or Self-Preservationist?

For centuries, economists have identified self-interest as the driving force behind human behavior. This has exasperated and even angered outside observers, who have mistaken self-interest for greed or money-obsession. It is neither. Rather, it merely recognizes that the structure of the human mind gives each of us a comparative advantage in the promotion of our own welfare above that of others. Because I know more about me than you do, I can make myself happier than you can; because you know more about you than I do, you can make yourself happier than I can. And by cooperating to share our knowledge with each other, we can make each other happier through trade than we could be if we acted in isolation – but that cooperation must preserve the principle of self-interest in order to operate efficiently.

Strangely, economists long assumed that the same people who function well under the guidance of self-interest throw that principle to the winds when they take up the mantle of government. Government officials and representatives, according to traditional economics textbooks, become selfless instead of self-interested when they take office. Selflessness demands that they put the public welfare ahead of any personal considerations. And just what is the “public welfare,” exactly? Textbooks avoided grappling with this murky question by hiding behind notions like a “social welfare function” or a “community indifference curve.” These are examples of what the late F. A. Hayek called “the pretense of knowledge.”

Beginning in the 1950s, the “public choice” school of economics and political science was founded by James Buchanan and Gordon Tullock. This school of thought treated people in government just like people outside of government. It assumed that politicians, government bureaucrats and agency employees were trying to maximize their utility and operating under the principle of self-interest. Because the incentives they faced were radically different than those faced by those in the private sector, outcomes within government differed radically from those outside of government – usually for the worse.

If we apply this reasoning to members of the Supreme Court, we are confronted by a special kind of self-interest exercised by people in a unique position of power and authority. Members of the Court have climbed their career ladder to the top; in law, there are no higher rungs. This has special economic significance.

When economists speak of “competition” among input-suppliers, we normally speak of people competing with others doing the same job for promotion, raises and advancement. None of these is possible in this context. What about more elevated kinds of recognition? Well, there is certainly scope for that, but only for the best of the best. On the current Court, positive recognition goes to those who write notable opinions. Only Justice Scalia has the special talent necessary to stand out as a legal scholar for the ages. In this sense, Justice Scalia is “competing” in a self-interested way when he writes his opinions, but he is not competing with his fellow justices. He is competing with the great judges of history – John Marshall, Oliver Wendell Holmes, Louis Brandeis, and Learned Hand – against whom his work is measured. Otherwise, a justice can stand out from the herd by providing the deciding or “swing” vote in close decisions. In other words, he can become politically popular or unpopular with groups that agree or disagree with his vote. Usually, that results in transitory notoriety.

But in historic cases, there is the possibility that it might lead to “Hooverization.”

The bigger government gets, the more power it wields. More government power leads to more disagreement about its role, which leads to more demand for arbitration by the Supreme Court. This puts the Court in the position of deciding the legality of enactments that claim to do great things for people while putting their freedoms and livelihoods in jeopardy. Any judge who casts a deciding vote against such a measure will go down in history as “the man who shot down” the Great Bailout/the Great Health Care/the Great Stimulus/the Great Reproductive Choice, ad infinitum.

Almost all Supreme Court justices have little to gain but a lot to lose from opposing a measure that promotes government power. They have little to gain because they cannot advance further or make more money and they do not compete with J. Marshall, Holmes, Brandeis or Hand. They have a lot to lose because they fear being anathematized by history, snubbed by colleagues, picketed or assassinated in the present day, and seeing their children brutalized by classmates or the news media. True, they might get satisfaction from adhering to the Constitution and their personal conception of justice – if they are sheltered under the umbrella of another justice’s opinion or they can fly under the radar of media scrutiny in a relatively low-profile case.

Let us attach a name to the status occupied by most Supreme Court justices and to the spirit that animates them. It is neither self-interest nor selflessness in their purest forms; we shall call it self-preservation. They want to preserve the exalted status they enjoy and they are not willing to risk it; they are willing to obey the Constitution, observe the law and speak the truth but only if and when they can preserve their position by doing so. When they are threatened, their principles and convictions suddenly go out the window and they will say and do whatever it takes to preserve what they perceive as their “self.” That “self” is the collection of real income, perks, immunities and prestige that go with the status of Supreme Court Justice.

Chief Justice John Roberts is an example of the model of self-preservation. In both ObamaCare decisions, his opinions for the majority completely abandoned his previous conservative positions. They plumbed new depths of absurdity – legal absurdity in the first decision and semantic absurdity in the second. Yet one day after the release of King v. Burwell, Roberts dissented in the Obergefell case, chiding the majority for “converting personal preferences into constitutional law” and disregarding the clear meaning of the language in the laws under consideration. In other words, he condemned precisely those sins he had himself committed the previous day in his majority opinion in King v. Burwell.

For decades, conservatives have watched in amazement, scratching their heads and wracking their brains as ostensibly conservative justices appointed by Republican presidents unexpectedly betrayed their principles when the chips were down, in high-profile cases. The economic model developed here lays out a systematic explanation for those previously inexplicable defections. David Souter, Anthony Kennedy, John Paul Stevens and Sandra Day O’Connor were the precursors to John Roberts. These were not random cases. They were the systematic workings of the self-preservationist principle in action.

DRI-162 for week of 2-1-15: It Happens Every Season

An Access Advertising EconBrief:

It Happens Every Season

The Super Bowl has come and gone. And with it have come stories on the economic benefits accruing to the host city – or, in this case, cities. The refrain is always the same. The opportunity to host the Super Bowl is the municipal equivalent of winning the Powerball lottery. Thousands – no, hundreds of thousands – of people descend on the host city. They focus the world’s attention upon it. They “put it on the map.” They spend money, and that money rockets and ricochets and rebounds throughout the local economy with ballistic force, conferring benefits left, right and center. We cannot help but wonder – why don’t we replicate this benefit process by bringing people and businesses to town? Why wait in vain on a Super Bowl lottery when we can instead run our own economic benefit lottery by offering businesses incentives to relocate, thereby redistributing economic benefits in our favor?

It happens every winter. In fact, publicity about economic development incentives (EDIs) is always in season, for they operate year-round. Nowadays almost every state in the union has a government bureau with “economic development” on its nameplate and a toolkit bulging with subsidies and credits.

For years, the news media have mindlessly repeated this stylized picture of EDIs, as if they were all working from the same talking points. Both the logic of economics and empirical reality differ starkly from this portrait.

EDIs In a Nutshell

The term “EDIs” is shorthand for a variety of devices intended to make it more attractive for particular businesses to relocate to and/or operate in a particular geographic area. The devices involve either taxes or subsidies. Sometimes a business will receive an outright grant of money to relocate, much as an individual gets a relocation bonus from his or her company. Sometimes a business will receive a tax credit as an inducement to relocate. The tax credit may be of specified duration or indefinite. Sometimes the business may receive tax abatement – property tax abatements are especially favored. Again, this may be time-limited or indefinite. Sometimes the tax or subsidy is implicit rather than explicit. Sometimes businesses will even receive production subsidies in excise form; that is, a per-unit subsidy on output produced.

Various forms of implicit or in-kind benefit are also offered. These include grants of land for production facilities and exemption from obligations such as payment for municipal services.

These do not exhaust the EDI possibilities but the list is representative and suggestive.

A Short, Sour History of EDIs

Proponents of EDIs indignantly reject the charge that their ideas are new. On the contrary, government favors to business trace back to the early years of the republic, they insist.

It is certainly true that the early decades of the 19th century saw a boom – today, we would call it a “bubble” – in the building of canals, primarily as transportation media. The Erie Canal was the most famous of these. Although the canals were privately owned, they were heavily subsidized and supported by government. Are we surprised, then, that the canal boom went bust, sinking most of its investors like sash weights? Railroads are traditionally given credit for spearheading U.S. economic development in the 19th century, and the various special favors they won from state and local governments are legendary. They include subsidies and extravagant rights of way on either side of their trackage. But economist Robert Fogel won a Nobel Prize for his downward revision of the importance of railroads to the economic growth of 19th-century America, so there is less there than meets the mainstream historical eye.

The modern emphasis on EDIs can be traced back to the state industrial finance boards of the 1950s. These became more active in the late 1960s and 70s when the national economy went stagnant with simultaneous inflation and recession. Like European national governments today, state and local governments were trying to steal businesses from each other. They lacked central banks and the power to print money, so they couldn’t devalue their currencies as European nations are now doing serially. Instead, they used selective economic benefits as their tools for redistributing businesses in their favor. And, like Europe today, they found that these methods only work as intended when employed by the few. When everybody does it simultaneously, they cancel each other out. One state steals Business A from another, but loses Business B. How do we know whether that state has gained or lost on net balance? We don’t, but in the aggregate nobody wins because businesses are simply being reallocated – and not for the better. Of course, we haven’t yet stopped to consider whether the state even gained from wooing Business A in the first place.
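The cancel-out logic described above has the structure of a prisoner's dilemma. Here is a minimal sketch in Python; the payoff numbers are hypothetical, chosen purely to illustrate the incentive structure, not drawn from any actual state budget:

```python
# A two-state subsidy game with illustrative payoffs (State A, State B).
# "offer" = grant EDI subsidies to lure businesses; "hold" = refrain.
payoffs = {
    ("hold", "hold"):   (0, 0),     # no subsidies: status quo
    ("offer", "hold"):  (3, -3),    # A lures a business away from B
    ("hold", "offer"):  (-3, 3),    # B lures a business away from A
    ("offer", "offer"): (-1, -1),   # businesses merely reallocate; both pay subsidies
}

def best_response(player, other_action):
    """Return the action maximizing the given player's payoff,
    holding the other player's action fixed."""
    actions = ("hold", "offer")
    if player == 0:
        return max(actions, key=lambda a: payoffs[(a, other_action)][0])
    return max(actions, key=lambda a: payoffs[(other_action, a)][1])

# "offer" is each state's best response no matter what the other does...
assert best_response(0, "hold") == "offer"
assert best_response(0, "offer") == "offer"
# ...yet when both states follow that logic, both end up worse off
# than if both had held back.
assert payoffs[("offer", "offer")][0] < payoffs[("hold", "hold")][0]
```

The equilibrium of mutual offering is exactly the "everybody does it simultaneously" outcome: each state's subsidies are individually rational and collectively self-defeating.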

We can look back on many celebrated startups and relocations that were midwifed by EDIs. In Tennessee, Nissan got EDI subsidies for relocating to the state in 1980. Later, GM built its famous Saturn plant there. In both cases, the big selling point was the large number of jobs ostensibly created by the project. We can get some idea of the escalation in the EDI bidding sweepstakes by comparing the price tag per job over time. The Nissan subsidies cost roughly $11,000 per job created. At this price, it is hard to envision an economic bonanza for the host community, but compare that to the $168,000 per job created that went to Mercedes Benz for relocating to Alabama in 1993. In 1978, Volkswagen promised 20,000 jobs for the $70 million it got for moving to Pennsylvania, but ended up delivering only about 6,000 jobs before closing the plant within a decade.
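The arithmetic behind these price tags is worth making explicit. The sketch below uses the dollar figures and job counts cited above; the per-job divisions (and the distinction between promised and delivered jobs) are the only additions:

```python
# Cost-per-job arithmetic for the EDI packages cited above.
# Dollar figures and job counts are the article's; the division is ours.

def cost_per_job(subsidy_dollars, jobs):
    """Subsidy price tag divided by the number of jobs."""
    return subsidy_dollars / jobs

# Volkswagen, Pennsylvania, 1978: $70 million for a promised 20,000 jobs...
promised = cost_per_job(70_000_000, 20_000)   # $3,500 per promised job
# ...but only about 6,000 jobs were delivered before the plant closed.
delivered = cost_per_job(70_000_000, 6_000)   # roughly $11,667 per actual job

print(f"Per promised job:  ${promised:,.0f}")
print(f"Per delivered job: ${delivered:,.0f}")
```

The gap between the promised and delivered figures is itself part of the story: the subsidy is paid up front against a forecast, while the jobs arrive (or don't) afterward.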

There is every reason to believe that these results were the rule, not the exception. Economists have identified the phenomenon known as the “winner’s curse,” in which winning bidders often find that they had to bid such a high price to win that their benefits were eaten up. Economists have long objected to the government practice of setting quotas on imported goods because the quota harms domestic consumers more than it benefits domestic producers. Moreover, governments customarily give import licenses to politically favored businesses. Economists plead: Why not open up the licenses to competitive bid? That would force would-be beneficiaries of the artificial shortage created by the quota to eat up their monopoly profits in the price they pay for the import license. Then taxpayers would benefit from the revenue, making up for what they lose in consumption of the import good. This same principle prevents cities from benefitting when they “bid” against other cities to lure firms by offering them subsidies and tax credits – they have to offer the firm such lucrative benefits to win the competition against numerous other cities that any benefits provided by the relocating business are eaten up by the subsidy price the city pays.

The Economics of Business Location

The general public probably envisions an economic textbook with a chapter on “economic development” and tips on how to lure businesses and which types of business are the most beneficial, as well as tables of “multiplier” benefits for each one.

Not! The theory of economic development is silent on this subject. The only applicable economic logic is imported from the theory of international trade. The case of import quotas provided one example. The specter of European nations futilely trying to outdo each other in trashing the value of their own currencies is another; international economists use the acerbic term “beggar thy neighbor” to characterize the motivation behind this strategy. It applies equally to states and cities that poach on businesses in neighboring jurisdictions, trying to lure them across state or municipal boundaries where they can pay local taxes and provide prestigious photo opportunities for politicians.

What about the Keynesian theory of the “multiplier,” in which government spending has a multiple effect on income and employment? Even if it were true – and the major Keynesian criticisms of neoclassical theory have all been overturned – it would apply only under conditions of widespread unemployment. It would apply only to national governments, which can set policies for the entire nation and have the power to control and alter the supply of money and credit and rates of interest. Thus, the principle would be completely inapplicable to state and local governments anyway.

Economists believe that there is an economically efficient location for a business. Typically, this will be the place where it can obtain its inputs at lowest cost. Alternatively, it might be where it can ship its output to consumers the cheapest. If EDIs cause a business to locate away from this best location by falsely offsetting the natural advantages of another location, they are harming the consumers of the goods and services produced by the businesses. Why? The business is incurring higher costs by operating in the wrong location, and these higher costs must be compensated by a higher price paid by consumers than would otherwise be true. That higher price combines with the subsidies paid by taxpayers in the host community to constitute the price paid for violating the dictates of economic efficiency.

Why do economists obsess over efficiency, anyway? The study of economics accepts as a fact that human beings strive for happiness. In order to attain our goals, we must make the best use of our limited resources. That requires optimal consumer choice and cost minimization by producers. When government – which is a shorthand term for the actions of politicians, bureaucrats and lower-level employees acting in their own interests – mucks up the signaling function of market prices, it distorts the choices made by consumers and producers. Efficiency is reduced. And this effect is far from trivial. A previous EconBrief discussed an estimate that federal-government regulations since 1949 have reduced the rate of economic growth in the U.S. by a factor of three, implying that average incomes would be roughly $125,000 higher today in their absence.

EDIs are a separate issue from regulation. They are more recent in origin but growing in importance. In 1995, the Minneapolis Federal Reserve published a study by economists Melvin Burstein and Arthur Rolnick, entitled “Congress Should End the Economic War Between the States.” At about the same time, the United Nations published its own study dealing with a similar phenomenon at the international level.

Borrowing once again from the theory of international trade, these studies view production in light of the principle of comparative advantage. Countries (or states, or regions, or cities, or neighborhoods, or individual persons) specialize in producing goods or services that they produce at lower opportunity cost than competitors. Freely fluctuating market prices will reflect these opportunity costs, which represent the monetary value of alternative production foregone in the creation of the comparative-advantage good or service. Free trade between countries (or states, regions, cities, neighborhoods or persons) allows everybody to enjoy the consumption gains of this optimal pattern of production.

Burstein, Rolnick, the U.N., et al felt that politicians should not be allowed to muck up free markets for their own benefit and said so. That debate has continued ever since in policy circles.

The Umpires Strike Back: EDI Proponents Respond 

Responses of EDI proponents have taken two forms. The first is anecdotal. They cite cases of particular successful EDI regimes or projects. The cited case is usually a city like Indianapolis, IN, which enjoyed a run of success in luring businesses and a concurrent spurt of economic growth. A less typical case is Kansas City, KS, which languished for several decades in prolonged decay with a deserted, crumbling downtown area and crime-ridden government housing projects and saw its tax base steadily disintegrate. The city subsidized a NASCAR-operated racing facility on the western edge of its county, miles away from its downtown base. It also subsidized a gleaming shopping and entertainment district slightly inward of the racetrack. Both NASCAR and the shopping district have benefitted from these moves, and politicians have claimed credit for revitalizing the city by their efforts. A recent Wall Street Journal column described the policy as having revamped “the city and its reputation.”

The second argument consists of a few studies that claim to find a statistical link between the level of spending on EDIs and the rate of job growth in states. Specifically, these studies report “statistically significant” relationships between those two variables. This link is cited as justification for EDIs.

Both these arguments are extremely weak, not to say specious. It is widely recognized today that most investors are foolish to actively manage their own stock portfolios; e.g., to pick stocks in order to “beat the market” by earning a rate of return superior to the average rate available on (say) an index fund such as the S&P 500. Does that mean that it is impossible to beat the market? No; if millions of investors try, a few will succeed due to random chance or luck. Another few will succeed due to expertise denied to the masses.

Analogous reasoning applies to the anecdotal argument made by EDI proponents. A few cities are always enjoying economic growth for reasons having nothing to do with EDIs – demographic or geographic reasons, for example. With large numbers of cities “competing” via EDIs, a few will succeed due to random chance. But this does not make, or even bolster, the case for EDIs. Indeed, the use of the term “competition” in this context is really false, because cities do not compete with cities – only concrete entities such as businesses or individuals can compete with each other. It is really the politicians that are competing with each other. And this form of competition, quite unlike the beneficial form of competition in free markets, is inherently harmful.

This sophisticated rebuttal is overly generous to the anecdotal arguments for EDIs. Even if we assume that the EDIs produce a successful project – that is, if we assume that Saturn succeeds at its Tennessee plant or NASCAR thrives in Kansas City, KS – it by no means follows that one company’s gains translate into areawide gains in real income. A study by the late Richard Nadler found no gains at all in local Gross Domestic Product for Wyandotte County, in which Kansas City, Kansas resides, years after NASCAR had arrived. The logic behind this result, reviewed later, is straightforward.

The studies claiming to support EDIs lean heavily on the prestige of statistical significance. Alas, this concept is both misunderstood and misapplied even by policy experts. Its meaning is binary rather than quantitative. When a relationship is found “statistically significant,” that means that it is unlikely to be completely random or chance but it says nothing about the quantitative strength or importance of the relationship. This caveat is especially germane when discussing EDIs, because all the other evidence tells us that EDIs are trivial in their substantive effect on business location decisions.
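The binary-versus-quantitative distinction can be made concrete with a toy calculation. The sketch below (all numbers invented for illustration) builds two large deterministic samples whose means differ by a trivial 0.02 standard deviations; a standard two-sample z-test nevertheless reports a p-value far below the conventional 0.05 cutoff:

```python
import math

# "Statistically significant" is not the same as "substantively important."
# Two deterministic samples with unit variance, differing in mean by only 0.02.
n = 100_000
base = [math.sqrt(3) * (2 * i / (n - 1) - 1) for i in range(n)]  # mean 0, variance ~1
a = [x + 0.02 for x in base]   # the "effect": a tiny 0.02-standard-deviation shift
b = base

def z_test(a, b):
    """Two-sample z-test: returns (mean difference, two-sided p-value)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return ma - mb, math.erfc(abs(z) / math.sqrt(2))

diff, p = z_test(a, b)
print(f"mean difference = {diff:.3f}, p-value = {p:.2e}")
# With samples this large, p falls far below 0.05 even though the effect
# itself is negligible in size.
```

With enough observations, almost any nonzero relationship clears the significance bar; the p-value says nothing about whether the relationship is big enough to matter, which is precisely the question EDI studies need to answer.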

For decades, intensive surveys have indicated that business executives select the optimal location for their business – then gladly take whatever EDIs are offered. In other words, the EDI is usually irrelevant to the actual location decision. But executives seal their lips when it comes to admitting this fact openly, because their interests lie in fanning the flames of the Economic War Between the States. That war keeps EDIs in place and subsidizes their moves and investments.

Thus, a statistical correlation between EDIs and job growth is not a surprise. But no case has been made that EDIs are the prime causal mover in differential job growth or economic growth among states, regions or cities.

Perhaps the best practical index of the demerits of EDIs would be the economic decline of big-spending blue states in America. These states have been high-tax, high-spending states that heavily utilized EDIs to reward politically favored businesses. This tactic may have improved the fortunes of those clients, but it has certainly not raised the living standards of the populations of those states.

If Not EDIs, What? 

It is reasonable to ask: If EDIs do not govern the wealth of states or cities, what does? Rather than offer selective inducements to businesses, governments would do better to offer across-the-board inducements via lower tax rates to businesses and consumers. Studies have consistently linked higher rates of economic growth with lower taxes on both businesses and individuals throughout the U.S.

Superficially, this strikes some people as counterintuitive. The word “selective” seems attractive; it suggests picking and choosing the best and weeding out the worst. Why isn’t this better than blindly lowering taxes on everybody?

In fact, it is much worse. Government bureaucrats or consultants are not experts in choosing which businesses will succeed or fail. Actually, there are very few “experts” at doing that; the best ones attain multi-millionaire or billionaire status and would never waste their time working for government. Governments fail miserably at that job. Better to allow the experts at stock-picking to pick stocks and relegate government to doing the very, very few things that it can and should do.

States and municipalities typically operate with budget constraints. They cannot create money as national governments can and are very limited in their ability to borrow money. So when they selectively give money to a few businesses with subsidies or tax credits, the remaining businesses or individuals have to pay for that in higher taxes. If lower taxes for a few are good for that few, then it follows that higher taxes for the rest must be bad for the rest. And this means that even if the subsidies promote success for the favored business, they will reduce the success of the other businesses and reduce the real incomes of consumers. In other words, the “economic development” promoted by government’s “subsidy hand” will be taken away by government’s “tax hand.” What the government giveth, the government taketh away. Oops.

Lower taxes for everybody work entirely differently. They change the incentives faced at the margin, causing people to work, save and invest more. The increased work effort causes more goods and services to be produced. The increased saving makes more financial resources available for investment by businesses. The increasing investment increases the amount of capital available for labor to work with, which makes labor more productive. This increased productivity causes employers to bid up wages, increasing workers’ real incomes.

Lest this process sound like a free lunch, it must be noted that unless the increased incomes are self-financing – that is, unless the increased incomes provide equivalent tax revenue at the lower rates – government will have to reduce spending in order to fulfill the conditions for stability. Since modern government is wildly inflated – heavily bureaucratized, over-administered and over-staffed as well as obese in size – this should not present a theoretical problem. In practice, though, the willingness to achieve this tradeoff is what has defined success and failure in economic development at the state and local level.

Markets Succeed. Governments Fail

EDIs fail because they are an attempt by government to improve on the workings of free markets. Free markets have only advantages while governments have only disadvantages. Free markets operate according to voluntary choice; governments coerce and compel. Voluntary choice allows people to adjust and fine-tune arrangements to suit their own happiness; compulsion makes no allowance for personal preference and individual happiness. Since human happiness is the ultimate goal, it is no wonder that markets succeed and governments fail.

Free markets convey vast amounts of information in the most economical way possible, via the price system. Since people cannot make optimal choices without possessing relevant information, it is no wonder that markets work. Governments suppress, alter and distort prices, thereby corrupting the informational content of prices. Indeed, the inherent purpose of EDIs is exactly to distort the information and incentives faced by particular businesses relative to the rest. It is no wonder, then, that governments fail.

Prices coordinate the activities of people in neighborhoods, cities, regions, states and countries. In order for coordination to occur, people should face the same prices, differing only by the costs of transporting goods from place to place. Free markets produce this condition. Governments deliberately interfere with this condition; EDIs are a classic case of this interference. No wonder that governments, and EDIs, fail.

DRI-183 for week of 12-7-14: Immigration and Economic Principles

An Access Advertising EconBrief:

Immigration and Economic Principles

1776 marked the founding of a new nation and a new intellectual discipline. The Declaration of Independence announced the creation of a United States of America and proclaimed the individual’s right to life, liberty and the pursuit of happiness. The Founders – specifically, the Declaration’s author – relied heavily on Adam Smith for the intellectual underpinnings of their document.

Smith’s Wealth of Nations, published in 1776, identified the purpose of all economic activity as consumption. Today, economists view consumption as the source of happiness. But in 1776, that notion was radical indeed. The reigning philosophy of government was mercantilism, which taught that government should accumulate gold (or specie generally) as a store of wealth by promoting the export of goods and discouraging imports. The resulting net inflow of gold would enrich the nation. Of course, even mercantilists knew that food was necessary to human survival – they coexisted with a primitive school of economists known as the physiocrats, who believed that land was the only source of economic value and agriculture the only productive economic activity.

Smith’s work began the tradition of modern economics by overturning both his fallacious predecessors. The mercantilists were wrong on two counts: they were wrong to stress exports at the expense of imports and wrong to imply that a “favorable” export surplus was a stable outcome. Imports are the beneficial part of international trade because they enhance consumption; exports are the cost of international trade because they connote a sacrifice of goods sent abroad to obtain imported goods for consumption. Even if an export surplus were to prevail temporarily, it could not persist. Building on the work of his contemporary David Hume, who developed the famous “price-specie flow model,” Smith pointed out that the net inflow of money (either gold or silver) resulting from the export surplus would raise domestic prices, causing exports to become less desirable to domestic residents and foreign imports to become more desirable.

Smith also pointed out that human labor created goods for consumption not only by working the land but in factories as well. His discussion of a pin factory is still studied today as a pioneering analysis of productivity.

Thus did the modern study of economics and international trade begin life together. International economics has stayed in the spotlight ever since. Currently immigration occupies center stage; President Obama has seized the political initiative from the Republicans by proposing to temporarily suspend enforcement of immigration laws against large numbers of undocumented immigrants.

Unfortunately, the accidental historical precedence given to international economics has contributed to the misapprehension that this field of economics is sui generis. The truth is that international economics is subordinate to general economic theory. The truths of basic economics apply internationally as well as intranationally. In fact, most international issues would be clearer if they were reconfigured in intranational form. This applies just as strongly to immigration as it does to every other aspect of international economic theory.

Migration and Marginal Productivity

When students take their first economics course, the principle of marginal productivity is one of the first lessons they learn. But first things first. In the beginning, there is scarcity – and it is pervasive. The “economic problem” is the outgrowth of scarce resources and infinite wants. There is no end to the number of good things that the human imagination can dream up. Unfortunately, virtually all of those good things are created using “inputs” – human labor, natural resources and produced goods. Inputs are available in limited quantities; they are “scarce.” Consequently, the good things – “output” – are also scarce. The science of economics has devised a pure logic of choice enabling us to make the best use of scarce inputs in producing scarce output to satisfy unlimited human wants.

The principle of marginal productivity deals with input allocation. It says to allocate inputs so that all marginal productivities are equal. That sounds mind-blowingly simple, and it is. In practice, what it boils down to is that business managers – indeed, all of us, if you want to view each individual as their own manufacturer of happiness – are on the lookout for situations in which some inputs are highly productive. For example, we are all looking for jobs in which our own labor specialties are highly valued. If we are teachers, we keep a weather eye peeled for highly paid teaching vacancies. Movie actors flock to auditions for desirable parts. Computer programmers look for programming jobs that offer the highest salaries.

Input prices, such as the wages paid for human labor, reflect the productivity of the input at the margin. The more productive the input, the greater the demand by managers and the higher the price they are willing to pay it. The more people supply the input, the more sellers compete to offer input services and the lower the price will be, all other things equal.

Input supply and demand determine the market prices for all inputs, from human labor to land to capital goods. The principle of marginal productivity governs the productive allocation of inputs – it tells us whether it makes sense to use more or less of each input in producing the various outputs. It also tells us whether it is efficient to shift inputs between different outputs by using more labor to produce one good and less labor to produce another one.

When we talk about changing input amounts and shifting inputs, we are talking not just about one particular place and one particular point in time. We are also talking about different places at the same time and about different points in time as well. That is, it may also make sense to shift labor from one place to a different place. The same is true of natural resources and capital goods. We also shift input use from today into the future and vice-versa. Differences in input prices and productivities are the keys to these shifts, too.

Migration is one of the most fundamental examples of all economic adaptive response. Differences in input price and productivity between geographic regions create an opportunity for gain by input reallocation. Let us assume that low-skilled human labor is more productive in Kansas than in Missouri. This will tend to make wages for unskilled labor in Kansas above those in Missouri. The most practical response to this discrepancy is for unskilled labor to migrate from Missouri to Kansas. This will tend to lower wages of unskilled labor in Kansas and raise them in Missouri, thereby reducing the wage discrepancy in the two states. The migration will also tend to reduce the marginal productivity discrepancy in the two states by lowering marginal productivity for unskilled labor in Kansas and raising it in Missouri.
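The equalizing mechanism just described can be sketched numerically. In the toy model below (all parameters hypothetical), the wage in each state equals labor's marginal product, which declines as the local workforce grows; workers migrate toward the higher wage until the gap closes:

```python
# A toy sketch of the migration mechanism described above.
# All numbers are hypothetical. Wage equals labor's marginal product,
# which falls as more workers are employed locally: w = A - b * L.

def wage(A, b, L):
    return A - b * L

# "Kansas" starts more productive (higher A) than "Missouri".
A_ks, A_mo, b = 30.0, 20.0, 0.01
L_ks, L_mo = 1000.0, 1000.0          # initial unskilled workforces

# Workers migrate toward the higher wage until the gap (nearly) closes.
for _ in range(10_000):
    gap = wage(A_ks, b, L_ks) - wage(A_mo, b, L_mo)
    if abs(gap) < 1e-6:
        break
    step = gap / (2 * b) * 0.5       # move half the flow that would equalize wages
    L_ks += step
    L_mo -= step

print(f"Kansas wage:   {wage(A_ks, b, L_ks):.2f}")   # both converge to 15.00
print(f"Missouri wage: {wage(A_mo, b, L_mo):.2f}")
```

Migration raises the Missouri wage and lowers the Kansas wage until neither worker nor employer has any remaining incentive to relocate; the same labor force now produces where its marginal product is highest.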

Migrations of this kind happen throughout the U.S. on a daily basis. Nobody thinks much about them, let alone takes measures to prevent them. But if I were to replace the word “Kansas” with the word “Texas” and the word “Missouri” with the word “Mexico,” the whole passage would suddenly become controversial and subject to debate. While intranational migration has occurred throughout American history without attracting unfavorable comment, international immigration has been heatedly debated since at least the 1920s.

Our discussion is the tipoff to the falsity of most of the debate. There is little economic difference between intranational migration and international immigration. The mere fact that the migratory movement crosses an international boundary does not invalidate it. It does not rob it of its economic value. Of course, it does change its superficial character. But that is all; the change is superficial only. The gain from immigration is the same as that from migration – more efficient use of scarce resources. It is one of the most basic, bedrock principles in economics.

Opportunity Cost and Comparative Advantage

The very first subject undertaken in the very first course in economics taken by college students is the subject of economic cost. What is special about economic cost, as opposed to (say) accounting cost? Economists view cost in a special way. Because all of us live our lives exchanging goods for money and vice versa, we are completely habituated to denominating prices and costs in monetary units. And that’s good, because it gives us a common denominator for valuing thousands of things whose heterogeneity would otherwise make comparative valuation a nightmare. Can you imagine a life in which we had to trade goods and services directly for other goods and services, without a medium of exchange to intermediate each transaction?

The thought sends shivers up and down your spine. But economists conceptually do just that when they explain microeconomic theory or, as it is sometimes called, price theory. That theory treats money prices only in relative or real form. A relative price reveals the implied sacrifice of one good involved in the purchase of one unit of another. For example, if the price of good X is $10 and the price of good Y is $5, then the relative price of X (its real price) is the ratio of X’s price to Y’s price: $10/$5 = 2. That is, the purchase of one unit of good X implies the sacrifice of two units of Y. While the money price of X is $10, its real price is 2Y. In a two-good world, this relative price is the opportunity cost of consuming X.

Why do economists go to all the trouble of jolting students out of their comfortable familiarity with monetary valuation and into the retrograde world of direct barter exchange? Not because barter trade has much practical application, certainly, although it does arise occasionally in special contexts. No, the purpose is expressed in an aphorism by the great 19th-century English economist John Stuart Mill, who characterized money as a veil that obscures but does not completely hide the underlying reality. That reality is that indirect monetary exchange substitutes for direct barter exchange, and this accounts for the concept of a relative or “real” price. When we pay money for goods we are really trading alternative consumption – specifically, the highest-valued alternative consumption purchase equal in monetary amount. This is a tipoff to the fact that the real value we derive from goods and services is the happiness they bring; money is merely a placeholder (or unit of account) that facilitates comparison and exchange.

We penetrate the monetary veil because it’s the only way to learn the underlying truths about opportunity cost and comparative advantage. In 1817, an English stockbroker named David Ricardo assumed Adam Smith’s mantle as the world’s leading economist by developing a revolutionary model of international trade. Ricardo’s model stipulated two hypothetical countries. He could just as well have called them “A” and “B,” but with an eye to the headlines of his day he called them “England” and “Portugal.” He specified two produced goods, wine and cloth, both produced using human labor. (He treated all labor hours as equivalent.) No chauvinist, he assumed that Portugal was capable of producing both goods using fewer labor hours than was England. He began by assuming a condition of autarky; that is, no international trade between the two countries. He also stipulated (arbitrary) price and production levels for both goods in each country.

Up to this point, Ricardo had done nothing remarkable by contemporary standards. But now he hit his audience with a thunderbolt. He asserted that opening up the two countries to international trade would benefit both of them by allowing them to consume more than each country could produce and consume in the absence of international trade.

First, Ricardo pointed out that the true economic cost of production for wine and cloth in each country was not the (unspecified) monetary cost of employing labor. It was not even the amount of labor hours used to produce each good. (Up to this point, classical economists such as Adam Smith had favored a “labor theory of value”; the value of any good was determined by the amount of labor required to produce it.) No, the true economic cost was the opportunity cost of production – except that Ricardo called it the “comparative cost.” Based on the labor coefficients of each good in each country, Ricardo calculated the opportunity cost of one unit of wine and cloth production in both England and Portugal.

And lo! The results shocked the world. In fact, they still do. Even though Portugal appeared to be the more efficient producer of both goods, it had a lower opportunity-cost of production for one good only – wine. Portugal was the more efficient wine producer because its opportunity-cost of production was lower than England’s.

The implications of this finding were – and are – world-shaking. England should specialize in its most efficient good, cloth, by producing more cloth than it did under autarky. Portugal should produce more wine than it did under autarky. (Actually, Ricardo’s model prescribed complete production specialization by each country, an artifact of the super-simplified assumptions built into his model.) Then the two countries should trade internationally – England should export cloth to Portugal in exchange for wine produced by Portugal, thus allowing both countries to consume both goods. The terms of trade should represent a ratio of prices intermediate to that existing under autarky.
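The arithmetic behind the thunderbolt is brief enough to work through. The sketch below uses Ricardo's well-known illustrative labor coefficients (hours to produce one unit of each good); the helper function is ours:

```python
# Ricardo's illustrative labor coefficients: hours to produce one unit.
hours = {
    "England":  {"cloth": 100, "wine": 120},
    "Portugal": {"cloth":  90, "wine":  80},   # absolutely more efficient in BOTH goods
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone per unit of `good` produced."""
    return hours[country][good] / hours[country][other]

for country in hours:
    oc_cloth = opportunity_cost(country, "cloth", "wine")
    oc_wine = opportunity_cost(country, "wine", "cloth")
    print(f"{country}: 1 cloth costs {oc_cloth:.2f} wine; "
          f"1 wine costs {oc_wine:.2f} cloth")
```

England's cloth costs only 100/120 ≈ 0.83 units of wine, against Portugal's 90/80 ≈ 1.13; Portugal's wine costs 80/90 ≈ 0.89 units of cloth, against England's 1.20. So despite Portugal's absolute superiority in both goods, England has the comparative advantage in cloth and Portugal in wine, and any terms of trade between those two ratios leave both countries better off.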

Sure enough – Ricardo’s model generated a result in which both England and Portugal achieved consumption levels for wine and cloth that exceeded the possibilities open to them under autarky. At the time, this seemed to the general public like a magic trick. To some people today, it still does. Some people have never learned it and others refuse to believe what they learned. Then there are those who insist that Ricardo’s conclusions apply only in textbooks and not in reality, for a host of reasons.

There are two key insights behind Ricardo’s theory. The first is his notion of comparative cost. Modern economists have broken this term in two. They have modified the term “comparative cost” to “opportunity cost” in order to stress its alternative element. To bring out the comparative or relative element, they have devised the term “comparative advantage” to encompass situations like England’s in Ricardo’s theory. Despite being less productively efficient in both goods in the absolute sense, England nevertheless had a comparative production advantage in cloth because its opportunity-cost of production was lower.

But merely identifying the locus of comparative advantage is purely academic unless we act on it by specializing in production, which creates the extra output that allows us all to consume more by engaging in international trade. Thus, specialization and trade is the second key element in Ricardo’s theory.

Thus far in this section, we have said nothing whatever about immigration. But immigration is the proverbial elephant in the room. For thousands of years, civilization has been following this principle of specialization and trade according to comparative advantage. That is what we do when we grow up, go to school, get a job, work and earn money – then use the money to support our lifestyles. We did it for millennia without realizing what we were doing or why, like the character in Moliere’s play who had been speaking prose all his life without realizing it.

David Ricardo developed his theory in terms of international trade for the same reason that Adam Smith began the modern study of economics by focusing on international trade: that was where the action was in terms of money, public interest and government activity.

It is only very recently that economic textbooks have tentatively begun to point out that the same insight they have been flogging for centuries while teaching the theory of international trade is valid in intranational trade. In fact, this is exactly the insight that has accounted for human productivity since the days when human beings left their hunter-gatherer bands and formed individual families residing in villages, towns and cities.

And how does immigration fit into this implicit theory of everyday production, you ask? The answer would be too mundane to need mention were it not for the fact that so many people ferociously resist it even now. In order for specialization and trade according to comparative advantage to work, people have to specialize in their comparative-advantage line of production, just as England and Portugal had to specialize in Ricardo’s model for those countries to realize the gains from international trade.

And they can’t very well specialize when they aren’t allowed to work at what they do best, can they? Yet Mexicans who are five times more productive working in Texas than in Mexico are nevertheless barred from working legally in the U.S.! The fundamental principles of markets are designed to achieve maximum productivity by assigning all of us to our highest-valued uses, where our marginal productivity is highest. And U.S. immigration laws allow people to move across international borders only according to their national origin, which has as much to do with their marginal productivity as the color of their eyes does.

Is this any way to run a railroad? Is it any wonder that the greatest economists, like Milton Friedman, constantly stress fundamental principles rather than niggling about esoteric mathematics or econometric models?

Cost Minimization

The standard microeconomic theory taught in college courses is divided into three subject areas: the theory of consumer demand, the theory of cost and production and the theory of marginal input productivity. The theory of cost and production is sometimes called “the theory of the firm” because its usual application is to business firms. That theory explains the optimal logic behind the production and sale of output to consumers by businesses.

A key principle of this theory is cost minimization. The theory of the firm assumes that the firm’s goal is profit maximization. (“Profit” might be viewed in instantaneous terms as the residual of total revenue from the sale of output minus all costs of production, including the opportunity cost of capital and/or the owner’s labor time, or it might be viewed intertemporally as the discounted present value of expected future net revenue.) The firm’s manager(s) will choose the rate of output that maximizes profit and will select the combination of inputs that minimizes the cost of producing that rate of output.
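The two notions of profit mentioned in the parenthetical above can be stated as formulas. In the sketch below, the revenue, cost and discount-rate figures are hypothetical, chosen purely for illustration:

```python
# A hedged sketch of the two profit notions described above. All numbers
# (revenue, cost, cash flows, the 8% discount rate) are hypothetical.

def instantaneous_profit(total_revenue, total_cost):
    """Profit as a residual: revenue minus all costs of production,
    where total_cost includes the opportunity cost of capital and of
    the owner's labor time."""
    return total_revenue - total_cost

def present_value(net_revenues, r):
    """Intertemporal profit: the discounted present value of expected
    future net revenue, discounted at rate r per period."""
    return sum(cf / (1 + r) ** t for t, cf in enumerate(net_revenues, start=1))

print(instantaneous_profit(1_000_000, 940_000))   # 60000
# Three years of 60,000 in expected net revenue, discounted at 8%:
print(round(present_value([60_000, 60_000, 60_000], 0.08), 2))
```

Cost minimization enters through `total_cost`: for any chosen rate of output, the profit-maximizing firm picks the input combination that makes that figure as small as possible.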

It goes without saying that the firm will purchase any quantity of (homogeneous) input at the lowest possible price. Alternatively, the firm will purchase the highest quality of any heterogeneous input at a given price.

Well, it’s supposed to go without saying, anyway. But when it comes to immigration, suddenly it’s a crime even to say it out loud. When employers want to hire foreign workers because they can pay lower wages than they are paying to domestic workers for the same work, that turns out to be illegal, or immoral, or fattening or otherwise verboten. But if this is not only allowable but even downright de rigueur in an intranational context, why should it be unthinkable in an international context?

Of course, the answer is that it shouldn’t. It is just as beneficial to minimize costs by hiring cheap foreign labor as it is to hire cheap domestic labor. It is just as beneficial to hire cheap labor from any source as it is to purchase cheap raw materials or cheap land or cheap machinery.

Did a reader respond by inquiring “beneficial for whom?” Well, the answer is “beneficial in the first instance for business owners, but beneficial in the long run for everybody, because lower costs ultimately are reflected in lower prices and everybody is a consumer – including all the owners of inputs who are paid the lowest prices.” We can’t always guarantee that every single person benefits from every efficient economic activity – such as immigration – more than they suffer from it. But that has to be true for most people – otherwise, how did civilization advance as it has over the millennia? How did the U.S. become the U.S.?

What About “Fiscal Cost” or “Net Job Creation” or …

We now know that the concept of free and open migration – whether inside the boundaries of a nation or across national boundaries – is fundamental to the efficiency of markets. It is inextricably interwoven into the fabric of our everyday lives, so much so that we take it completely for granted. Thus, when we protest against immigration by foreigners into our country we are engaging in the most blatant contradiction.

How many times have readers of this EconBrief previously seen this issue framed in these clear, straightforward terms? Chances are, the answer is: Zero. Instead, we are presented with a variety of alternative arguments against immigration.

For example, a fairly recent anti-immigration tactic is the “fiscal cost” scam. We are urged to restrict immigration – or ban it altogether – because it is unaffordable. Supposedly, immigrants cost the government more in various forms of transfer payments (welfare, Social Security, emergency medical and more) than they generate in receipts (various tax payments). Thus, on net balance they flunk the criterion of “fiscal cost.”

The non-economist might suppose that this is a key test of economic worthiness, perhaps tabulated quarterly or annually on every American by a government bureau and kept on file. What a laugh. Fiscal cost is a term made up by anti-immigrationists in order to discredit immigrants. The easiest way to appreciate this is to recall that half of the American population now pays no income tax. It has recently come to light that most Americans stack up even worse by the fiscal cost standard than do immigrants! This is hardly surprising; immigrants are not eligible for most forms of welfare and tend to be younger than the average American, so get less medical treatment than average as well. They are more entrepreneurial and tend to work harder, so are more productive as well. This follows because, far from being the tired, dispossessed, tempest-tossed, ragged poor of the Emma Lazarus poem, immigrants tend to have more initiative and smarts than the average person. They have to be better than average in order to contemplate traveling to another land, speaking a foreign language, coping with another culture and starting another life. The anti-immigration stereotype of a lazy bum who somehow runs the border gauntlet in order to live off the fat of the U.S. welfare state is a particularly egregious myth.

Calculating fiscal cost is no easy task. Why would a researcher engage in laborious calculations to produce estimates of aggregate effects whose meaning is so obscure? Actually, complexity and obscurity are what make concepts like fiscal cost attractive to anti-immigrationists. The last thing they want to do is join a debate on fundamental economic principles, where the issues are so straightforward and clear-cut. Why start a fight they are destined to lose? Instead, they want to pick a fight they can pretend to win because the public will not know how to judge it. We are so used to hearing economic issues outlined in complicated terms, so accustomed to watching with glazed eyes and hearing without comprehending that we fall back on our emotions rather than our reason.

Now the anti-immigrationists have us where they want us. The immigration debate takes us back to the days of pre-history, when mankind first began to break up the ancient bands and form families. Outsiders were looked upon with suspicion. Trade and specialization were forbidden; economic activity was geared to benefit the band, not the individual household. Today, the nation state has taken the place of the ancient hunter-gatherer band as the extended family. The state dispenses welfare benefits and rules over us with an iron fist. It wants to control economic activity for its benefit and the benefit of its acolytes. It inflames those ancient, instinctive antagonisms toward outsiders that still reside within the citizenry.

We can revert to the savage, instinctive atavism of mankind’s primitive past. Or we can embrace the reasoned productivity of freedom and free markets. The choice should be easy, for the record of history shows that markets have lifted mankind out of the muck and mire to the prosperity of today.

The last thing we should do is judge immigration by perusing the latest pseudo-study by a think-tank dedicated to obfuscating clear thought. The simplest, clearest, most basic of all economic principles tell us that immigration is vital to freedom and prosperity.

DRI-303 for week of 8-17-14: When Fighting Fire With Fire Just Makes a Bigger Blaze

An Access Advertising EconBrief:

When Fighting Fire With Fire Just Makes a Bigger Blaze

Fans of the classic television series Get Smart will recall the snappy comeback of secret agent Maxwell Smart to a malefactor indignant at the prospect of detention: “You’re not going to arrest me on this flimsy evidence, are you?” “No,” Smart replied confidently, “I’ve got some more flimsy evidence.”

The quality of empirical debate over public policy has deteriorated to this level. Just as politicians are now compelled to act virtually any time something goes wrong, no matter what it is or how slim the likelihood of successful intervention, no exchange of opposing views is complete without quantitative citation. As soon as one side unveils its numbers, the other side must respond with numbers of its own – no matter how far-fetched or badly compiled. It is a Newtonian law of equal and opposite polemical reaction.

As a result, public discourse is now debased to the point of decadence. The long-running debate over the minimum wage has plumbed these depths of intellectual degradation. In the August 21 Wall Street Journal op-ed, “Do Higher Minimum Wages Create More Jobs?” authors Liya Palagashvili and Rachel Mace probe for the bottom. It is as if they have rewritten Mel Brooks’ script: “You don’t expect me to believe this flimsy evidence, do you?” “Well, my flimsy evidence is a lot better than your flimsy evidence!”

The Left Wing’s Flimsy Evidence

Op-ed authors Palagashvili and Mace (hereinafter, P&M) correctly relate that the left-wing Center for Economic and Policy Research (CEPR) released a report purporting to demonstrate the success of state-level minimum-wage increases in raising employment growth relative to other states. The report was released in June, 2014, and used data compiled by the federal Bureau of Labor Statistics. It examined 13 states that increased their individual minimum wage (as distinct from the federal minimum wage) at the beginning of 2014 and compared them to the other 37 states whose minimum wage did not rise. The report claimed that the average overall employment growth among the 13 states exceeded that of the 37 states for the five-month comparison period.

The Obama administration appropriated these conclusions with the alacrity of a police department confiscating drug-dealer assets. As P&M note, there was the little matter of “why [the] firms [would] hire more workers when the government raises the cost of hiring workers?” The straight-faced answer was that “hiking the minimum wage raises the incomes of poor workers, causing them to spend more. This additional spending, in turn, is so great that firms hire even more workers.” No less a personage than Barack Obama himself got into this act. “That [worker spending] gets churned back into the economy. And the whole economy does better, including the businesses.”

A priori, this “theory” of economic development is so ludicrous that it would qualify for an evening comedy skit at an American Economic Association convention. “Ludicrous” means ludicrous a priori; its theoretical underpinnings are so completely lacking that nobody would take it seriously enough to investigate. Well, nobody should – these days, no premise is too ridiculous if it can backstop a political point. Our Economist-in-Chief in the White House needs to bolster his standing with the public and shore up two key constituencies. One of those is obvious – the poor, downtrodden low-skilled workers who allegedly benefit from the minimum wage. The other is hidden – the higher-skilled workers, particularly union members, who substitute for the low-skilled workers laid off after the minimum-wage increase.

The “spending rescue” thesis is the culmination of two decades’ worth of left-wing attempts to promote the minimum wage as the salvation of the poor. This crusade began in the early 1990s, when economists David Card and Alan Krueger published a now-legendary study purporting to show that an increase in the minimum wage in New Jersey increased employment there relative to Pennsylvania. The defects of this study have since become almost as legendary as its conclusions. It utilized phone surveys to gather data – a technique heretofore shunned within the profession but thereupon praised as innovative and groundbreaking. But when other economists attempted to confirm the results using payroll data instead, Card and Krueger’s results were reversed. The study’s econometrics has been panned by expert econometricians. Card and Krueger themselves were unable to supply a theoretical rationale for their result. Ordinarily, this would have been a fatal defect, but the policy implications of the study’s results were so delicious to the left wing that Card and Krueger were lionized and have gone on to professional fame and fortune. The only valid theory that would support their result does not comport with the reality of labor markets.

Why is the left so desperate to validate such a worthless policy measure? Their anxiety derives from the unique qualities of the minimum wage: it hides the benefits to their treasured constituency (unions), masquerades as a godsend to the poor while actually screwing them, and visibly appears to screw the rich (business owners, all of whom are assumed “rich” by definition) while actually doing so only in the short run. What a deal! The “optics” of the minimum wage are ideal for the left; that is, its visible or apparent effects are politically beneficial to them. Of course, its actual effects are harmful to everybody except the special-interest monopolists who comprise the left wing’s leading constituency these days, but that is jake with the left. Their ultimate goal is power – increasing real incomes for special interests are only a means to that end.

The Traditional Economic View of the Minimum Wage

Until Card and Krueger came along, the minimum wage vied with tariffs and quotas on foreign goods for the title of “most unpopular policy measure” among professional economists. Nearly a half-century of empirical examination reaffirmed the verdict of a priori theory: minimum wages redistribute jobs and real income from some poor and low-skilled workers to other poor and low-skilled workers by reducing employment, closing some businesses and temporarily reducing profits earned by businesses utilizing low-skilled labor.

These results are the outgrowth of the impact felt by business upon imposition of the minimum wage. Formally, it acts like a tax on the employment of low-skilled labor, which is the kind of labor directly affected by the minimum wage. That tax has three kinds of impact: a substitution effect, an output effect and a profit effect. (The first two of these are analogous to the substitution and income effect of a price change in consumer demand theory.) The substitution effect causes firms to employ less low-skilled labor and more of other inputs, including the higher-skilled labor previously mentioned as well as machinery that substitutes for labor. The output effect causes businesses employing low-skilled labor to produce less output, thereby employing fewer inputs of all kinds including labor. The profit effect reduces the profits earned by firms employing low-skilled labor. This third effect is only temporary, because the exit of some firms from the industry due to insolvency or better opportunities elsewhere will eventually raise the rate of return back to its previous, competitive level. That is why so-called rich business owners are adversely affected only transitorily by the minimum wage. The “permanent” gains go to workers who retain their jobs at the higher minimum wage. The “permanent” losses are suffered by workers who lose their jobs, some of whom may leave the labor force altogether. This phenomenon of exit from the labor force is by now well-known to most Americans; it has reached its highest level in over thirty years.

This is a formidable a priori case against the minimum wage. Economists never doubted that the minimum wage adversely affected employment of poor and low-skilled workers; they only doubted the degree to which this was true. Empirical studies of this issue began in the late 1940s, conducted by luminaries like future Nobel laureate George Stigler. Over the succeeding decades, economists used formal statistics to enforce the conditions necessary for a valid empirical examination of the issue.

One common defense of the minimum wage made by newspaper editorialists and readers over the years is that “the minimum wage went up but the U.S. unemployment rate did not go up; in fact, it went down, which proves that the minimum wage does not adversely affect employment.” This argument is invalid for several reasons. First, the minimum wage only affects employment within firms and industries that hire low-skilled labor. That does not begin to comprise the entire U.S. economy. Second, even within those industries directly affected by the minimum wage, the overall effects on employment of labor are equivocal. The substitution effect causes employment of less low-skilled labor but more higher-skilled labor, while the output and profit effects cause less employment of all inputs. It is not unusual at all to find that a liberal administration increases both the minimum wage and the money supply, with the latter causing temporary gains in income and employment that can swamp job losses associated with the minimum wage. This is not only ironic – since it harms the very people purportedly highest among the concerns of the left – but fully compatible with a condition in which the minimum wage causes job losses while the overall unemployment rate falls.

To avoid being fooled by effects outside the scope of the minimum wage, economists confined their studies to low-skilled workers and refined their statistical methods to correct for trends and outside influences. That has been the traditional focus of econometrics: to compensate for the ways in which social sciences differ from the laboratory experiments common to the physical sciences.

Now, though, traditional econometrics has taken a back seat to raw political desire. And this corrupting influence has infected both sides of the political spectrum.

The Right Wing Retaliates With Its Own Flimsy Evidence

P&M disdain virtually all of the history and a priori theory cited above. They have their own flimsy evidence to present against the minimum wage. Their case is purely quantitative; clearly they believe in fighting fire with fire. They begin by noting that low-skilled labor makes up only about 2% of the labor force, far too small a share to generate the high-powered spending necessary to outweigh the minimum wage’s disincentives.

While no doubt true, this leaves room for counterargument by the left. Minimum-wage proponents will respond by accusing P&M of “overlooking” the greater propensity to spend by the poorest families. This is a feeble rebuttal, but the average person won’t know the difference and will probably rule the point a draw at best.

P&M then make a stronger point – that the logic of proponents’ case should mean that bigger minimum-wage boosts should have bigger effects on employment. In fact, the opposite was the case in January-May, 2014. The three substantial minimum-wage increases took place in Connecticut, New Jersey and New York, the three falling between 5% and 14%. Yet these three states had the worst job growth of the 13 increase-states, an average of 0.3% compared to the 1.28% average increase in the other 10 states. “Indeed, job growth was worse in each of these three states than it was, on average, in the 37 states that did not raise their minimum wage at all,” P&M report. And “in New Jersey, the state that hiked [the] minimum wage the most – to $8.25 an hour from $7.25 – employment actually fell by about 0.56%.” In the state with the largest job growth, Washington State’s 2.1%, the minimum wage went up by a whopping 13 cents per hour, or almost $24 per month for a full-time employee.

If P&M had rested content with this demonstration, they could have escaped criticism. Up to this point, they were merely using the left’s own evidence against it without accepting its methods. They were showing that the left’s argument wasn’t consistent even in its own terms, albeit without demonstrating how hopelessly confused those terms really were.

But P&M couldn’t stand prosperity. To a roll of drums, they unwrapped the crown jewel in their collection. “We conducted a statistical analysis of the Bureau of Labor Statistics’ data called a two-sample ‘t’ test for comparing two means. We found, for this time period, no difference in the job-growth trend in the states that raised their minimum wages from states that did not. In other words, the correlation cited as debunking the economic case against the minimum wage is not statistically significant.”

Ta-daaaaaaa!!! Too bad there are no bows taken in print media; P&M would surely rate a round of applause in a run-of-the-mill graduate school economics seminar for their performance. It is surely no coincidence that “Ms. Mace studies economics at George Mason University” while Ms. Palagashvili is a law-school fellow at NYU. Alas, they have displayed academia at its worst.

That is not to say that P&M flubbed their econometric dubs by conventional standards. We don’t know because we can’t see their results and have only their word as to their findings. But taking their comments at face value, it seems that they followed what have become standard econometric procedures. The t statistic is the standard one for small-sample tests of statistical significance. A comparison of sample means is a basic econometric procedure. Almost certainly, they assumed the standard “null hypothesis” of no difference between average job growth in the 13 states as compared to job growth in the 37 states. In this context, “no difference” does not mean that the two averages are exactly the same, which they obviously aren’t. It means that the degree of correspondence between the two is not sufficient as to enable us to be confident that the correspondence was not due to random chance. And just what does “confident” mean? The standard meaning for it is that we must be at least 90% certain. Lacking that degree of confidence, we enter a finding of “statistically insignificant” – which means that the minimum-wage increase did not “cause” the increases in job growth.
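For readers curious about the mechanics, here is a minimal sketch of the pooled-variance two-sample t test of the kind P&M describe. The job-growth figures below are invented for illustration; they are not the BLS data the op-ed analyzed:

```python
# A hedged sketch of a pooled-variance two-sample t test, the standard
# small-sample comparison of two means. The job-growth percentages below
# are hypothetical, not the actual BLS state data.
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """t statistic for the null hypothesis that the two samples are
    drawn from populations with the same mean (pooled variance)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(pooled * (1 / na + 1 / nb))

raised   = [1.1, 0.3, 1.5, 0.9, 1.2]   # states that raised their wage floor
unraised = [0.8, 1.0, 1.3, 0.7, 1.1]   # states that did not

t = two_sample_t(raised, unraised)
# With 8 degrees of freedom, |t| must exceed roughly 1.86 for 90%
# (two-tailed) confidence; a smaller |t| is "statistically insignificant"
# in the sense the op-ed uses the term.
print(round(t, 3))
```

Note that the verdict is binary: the test says only whether the difference in the two means could plausibly be due to random chance, not how large any minimum-wage effect actually is.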

It is overwhelmingly likely that the readers of this op-ed – who undoubtedly make up a sample of Americans that is far more intelligent than any randomly chosen sample – fall into two categories: those who have no idea what P&M’s “statistical significance” paragraph meant and those who think they know but are wrong. Those who correctly understand it probably represent a statistically invisible sliver of its readership. And a majority of economists and statisticians are excluded from that sliver.

P&M thought that they were “one up” on the minimum-wage proponents at CEPR because they (P&M) were using the tool of statistical significance as it has been used for decades in academia and government. That statement would be correct only if the word “misusing” were substituted for “using” in two places. That is why they were fighting fire with fire – they were responding to CEPR’s misuse of numbers with their own misuse of statistical inference. Their mistakes were just fancier than CEPR’s, that’s all.

The Flaws of Statistical Significance

Various authors have expounded the flaws of statistical significance as developed by the late statistician Sir Ronald Fisher. The most comprehensive treatment is probably that of Deirdre McCloskey and Stephen Ziliak, The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice and Lives. For our purposes, it is sufficient to summarize how one of the two groups referred to above views the notion of statistical significance and compare it with the truth.

Ask readers of the Wall Street Journal op-ed to explain the meaning of P&M’s statistical significance paragraph in layman’s terms. Those who think they know the answer will probably say something like the following: “Well, it means that the effect of an increase in the minimum wage on overall job growth is insignificant, the opposite of significant. That means it is ‘too small to matter.’ It’s so small we can’t be confident that something else might not be causing what we’re seeing in job growth.” That’s an intuitively appealing explication for at least two reasons. First, it incorporates the familiar meaning of the words “significant” and “insignificant.” Second, it incorporates the kind of answer we are looking for when we do empirical research on issues like this. Typically, we want “how big” or “how much” kinds of answers rather than “yes or no” types of answers.

Unfortunately, the concept of statistical significance is not what most people think it is. Its findings do not convey any quantitative sense of how big an effect is or how much influence one variable (such as an increase in the minimum wage) has on another (such as state-level growth in employment). Rather, it is a binary, “yes-no” type of concept. It registers the likelihood that the influence of one variable on another is random, as compared to systematic or non-random. Because the variables involved are invariably derived from sample data, it can be viewed as a verdict on the representativeness of a chosen sample.

This is useful information, to be sure. But it is not the most useful information we could wish to obtain. And that is a crying shame because the obsession with statistical significance has pretty much overshadowed everything else in empirical research in the social sciences and even in much of the physical sciences today. This has reached such epidemic proportions that McCloskey, a leading economic historian and econometrician, declares that most statistical work in economics done over the last thirty years is useless and must be done over. That is tantamount to saying that we might as well junk the leading academic journals published during that interval.

Fighting Fire With Fire

The proper response to P&M’s rejoinder to the CEPR study and the left-wing minimum-wage ballyhoo is a polite yawn and a “So what?” This should be followed by a trip to the woodshed and back to the drawing board for P&M, where they would be schooled in proper econometric practice. Alternatively, they can do what true free-market economists have done while their colleagues were practicing pretend-Science: spend the time honing their understanding of concepts like the time-structure of production and capital theory. That will better inform their grasp of reality than the most esoteric econometric model.

Fighting fire with fire can work in specialized cases like oil-well fires. But in today’s debates over economic theory and policy, fighting fire with fire does not extinguish the original fire. It does not even provide intellectual illumination. It merely makes the blaze bigger.

DRI-304 for week of 3-2-14: Subjugating Florists: Power, Freedom and the Rule of Law

An Access Advertising EconBrief:

Subjugating Florists: Power, Freedom and the Rule of Law

A momentous struggle for human freedom is playing out in a mundane setting. Two people in Washington state are planning to wed. They want their florist, Arlene’s Flowers and Gifts, to supply flowers for the wedding. The owner, Barronelle Stutzman, refuses the job. The couple wants her to be compelled by law to provide service to them.

Even without knowing that particular facts distinguish this situation, we might suspect it. In this case, the couple consists of two homosexual men, Robert Ingersoll and Curt Freed. Ms. Stutzman’s refusal stems from an unwillingness to participate in – and thus implicitly sanction – a ceremony of which she disapproves on religious grounds.

The points at issue are two: First, does existing law forbid Ms. Stutzman’s refusal on the grounds that it is an illegal “discrimination” against the couple? Second, is that interpretation the proper one, regardless of its legality?

The first point is a matter for lawyers. (Washington’s Attorney General has filed suit against Ms. Stutzman.) The second point is a matter for all of us. On it may hinge the survival of freedom in the United States of America.

The Facts of the Case

The prospective married couple, Messrs. Ingersoll and Freed, has granted numerous interviews to publicize their side of the case. To the Christian Broadcasting Network (CBN), they described themselves as “loyal customers for a decade” of Arlene’s.

“It [Stutzman’s refusal] really hurt because it was somebody I knew,” Ingersoll confided. “We stayed awake all night Saturday. It was eating at our souls.”

For her part, Ms. Stutzman declared that “you have to make a stand somewhere in your life on what you believe….” The unspoken implication was that she had faced repeated challenges to her convictions, culminating in this decision to stand fast. “In America, the government is supposed to protect freedom, not… intimidate citizens into acting contrary to their faith convictions.”

The attitude displayed by major media outlets reflects the Zeitgeist, which decrees: Ms. Stutzman is guilty of illegal discrimination on grounds of sexual orientation. It is significant that this verdict crosses political boundaries. On the Sunday morning discussion program Face the Nation, longtime conservative columnist and commentator George Will claimed that “public-accommodations law” had long ago “settled” the relevant legal point regarding the requirement of a business owner to provide service to all comers once doors have been opened to the public at large. But Mr. Will nonetheless expressed dissatisfaction with the apparent victory of the homosexual couple over the florist. “They [homosexuals in general] have been winning…this makes them look like bad winners.” Mr. Will seemed to suggest that the couple should forgo their legal right and let Ms. Stutzman off the hook as a matter of good manners.

Legal, Yes; Proper, No

The fact that the subjugation of the florist is legal does not make it right. For decades, the Zeitgeist has been growing ever more totalitarian. Today, the United States of America approaches a form of authoritarian polity called an absolute democracy. In an absolute monarchy, one person rules. In an absolute democracy, the government is democratically elected but it holds absolute power over the citizens.

The inherent definition of freedom is the absence of external constraint. In this case, that would imply that Messrs. Ingersoll and Freed would be free to engage or refuse the services of Ms. Stutzman and Ms. Stutzman would be free to provide or refuse service to Messrs. Ingersoll and Freed – on any basis whatsoever. That is what freedom means. A concise way of describing the operation of the Rule of Law would be that all (adult) citizens enjoy freedom of contract.

But in our current unfree country, Messrs. Ingersoll and Freed are free to patronize Arlene’s or not but Ms. Stutzman is not free. She is required to serve Messrs. Ingersoll and Freed, like it or not. The couple’s sexual orientation has earned them the status of a privileged class. They have the privilege of compelling service. This is a privilege enjoyed by a comparative few.

George Will and company may pontificate about settled law, but the truth is that refusals of service happen daily in American business. Businesses often refuse other businesses as a courtesy, typically as an acknowledgement of their own shortcomings or lack of specialized knowledge or expertise. Sometimes a business will frankly admit that a would-be customer falls outside their target customer class. This sort of refusal rarely, if ever, leads to recriminations. After all, who really wants to pay for a product or service unwillingly supplied? The only exception comes when the customer falls within one of the government-protected categories covered by the anti-discrimination laws. Then the fear of litigation, financial and criminal penalties and adverse publicity kicks in.

This may be the clearest sign that the Rule of Law no longer prevails in America. The Rule of Law does not mean scrupulous adherence to statutory law. It means the absence of privilege. In America today, privilege is alive and growing like a cancer. In the past, we associated the term with wealth and social position. That is no longer true. Now it connotes special treatment by government.

The Role of Competition Under the Rule of Law

Under the Rule of Law, Messrs. Ingersoll and Freed would not be able to compel Ms. Stutzman to provide flowers for their wedding. But this would not leave them without recourse. The Rule of Law supports the existence of free competitive markets. The couple could simply call up another florist. True, they would be denied the service of their longtime acquaintance and supplier. But nobody is entitled to a lifetime guarantee of the best of everything. What if Ms. Stutzman were ill on their wedding day, or called out of town, or struck down by a beer truck? What if she went bankrupt or retired? The Rule of Law simply protects a free, competitive market from which Messrs. Ingersoll and Freed can pick and choose a florist.

That is not the only benefit the couple get from the Rule of Law and competition. In a competitive market, any seller who refuses service to a willing buyer must pay a penalty or cost in the form of foregone revenue. In strict, formal theory, a competitive market produces an equilibrium result in which the amount of output produced at the equilibrium price is exactly equal to the ex ante amount desired by consumers. A seller who turns away a buyer is throwing money down the drain. This is not something sellers will do lightly. Anybody who doubts this has never run a business and met a payroll. Thus, free competitive markets offer strong disincentives to discrimination.

Of course, that does not mean that businesses will never refuse a customer; the instant case proves that. But refusals of conscience like the one made by Ms. Stutzman will be comparatively rare, because it will be unusual for the owner to value the moral issue more than the revenue foregone.

The existence of competition under the Rule of Law is the safeguard that makes freedom and democracy possible. Without it, we would have to fear the tyranny of the majority over minorities. With it, we can safely rely on markets to protect the rights and welfare of minorities.

The Rule of Law and Limited Government

Free choice by both buyers and sellers is not the enemy of minority rights. The real danger to minorities is government itself – the very government that is today advertised as the champion of minorities.

After the Civil War, newly freed and enfranchised blacks entered the free economy in the South. They began to compete with unskilled and skilled white labor. This competition was successful, both because blacks were willing to work for lower wages and because some blacks had mastered valuable skills while slaves. For example, professional baseball originated in the 1860s and increased steadily in popularity; blacks participated in this embryonic period.

White laborers resented this labor-market competition. In order to artificially increase the wages of their members, labor unions had to restrict the supply of labor. Denying union membership to blacks was a common means of catering to member desires while furthering wage objectives. But the competition provided by blacks was difficult to suppress because employers had a clear incentive to hire low-wage labor that was also productive and skillful. Businesses had a strong monetary incentive not to refuse service to blacks because the money offered by blacks was just as green as anybody else’s money.

The solution found by the anti-black forces was the so-called “Jim Crow” laws. These forbade the hiring of blacks on equal terms and denied blacks equal rights to public accommodations and service. In effect, the Jim Crow laws cartelized labor and product markets in a way that would not otherwise have occurred. Governments also handed out special privileges to labor unions that enabled them to compel membership and deny it at will. Historically, labor unions excluded blacks from membership for the bulk of the 20th century. Blacks were banned from organized baseball and most other professional sports until the 1940s, when sports became the first wedge driven into the Jim Crow laws.

The apartheid laws passed in South Africa in the early 20th century also arose in order to thwart the successful competition offered to white labor by black labor. Left alone, competitive labor markets were enabling black South Africans to enjoy rising wages and employment. South African labor unions agitated for government protection against black workers. The result was the “pass laws” or “color bar” or apartheid system, not unlike the Jim Crow laws prevailing in America. Once again, the purpose was to cartelize labor markets in order to erect barriers against the competition offered to white labor by black workers.

The rationale behind public utilities was ostensibly to limit the pricing power and profits enjoyed by firms that would otherwise have been “natural monopolies.” In actual practice, by guaranteeing public utilities a “normal profit,” government removed the specter of a loss of revenue and profit associated with discrimination against black customers and employees. Sure enough, public utilities were among the chief practitioners of discrimination against blacks – along with government itself, which also did not fear a loss of profit resulting from its actions.

A recurring effect of government regulation of business in all its forms has been the erosion of competition. Sometimes that has been caused by costly compliance with regulation, driving businesses bankrupt and reducing market competition through attrition. Sometimes this has come from direct government cartelization of competitive markets, resulting from measures like marketing orders and quotas in milk and citrus fruit. Sometimes that has come from price supports, target prices and acreage allotments that have reduced agricultural output and raised prices or, alternatively, raised prices while creating costly surpluses for which taxpayers must pay. Sometimes the reduction in competition results from anti-trust laws like the Robinson Patman Act, deliberately designed to raise prices and restrict competition in retail business.

There is no formal, coherent theory of regulation. Instead, regulatory legislation is accompanied by vague protestations of good will and good intentions that have no unambiguous translation into policy. The typical result is that regulators either take over the role of controlling business decisions from market participants or they become the patrons and protectors of businesses within the industries they regulate. The latter attitude has evolved within the financial sector, where regulators have gradually taken the view that the biggest competitors are “too big to fail.” That is, the effects of failure would spill over onto too many other firms, causing widespread adverse effects. This, in turn, precludes discipline imposed by competitive markets, which force businesses to serve consumers well or go out of business.

The enemy of minorities is government, not free competitive markets. Government harms minorities directly by passing discriminatory laws against them or indirectly by foreclosing or lessening competition.

The Two-Edged Sword of Government Power

Many people find it difficult to perceive government as the threat because government vocally broadcasts its beneficence and cloaks its intentions in the vocabulary of good intentions. It bestows noble and high-sounding names on its legislative enactments. It endows them with historic significance. Like Edmond Rostand’s protagonist Chantecler, government pretends that its will causes the sun to rise and set and only its benevolence stands between us and disaster.

But the blessings of government are a two-edged sword. “A government powerful enough to give us everything we want is powerful enough to take from us everything we have.” One by one, the beneficiaries of arbitrary government power have also been stung by the exercise of that same power.

In 1954, government insisted that “separate was inherently unequal” and that the segregated education received by blacks must be inferior to that enjoyed by whites. Instead of introducing competition to schools, government intruded into education more than ever before. Now, six decades later, blacks still struggle for educational parity. And today, it is government that stands in the schoolhouse door to thwart blacks – not through segregation, but by resolutely opposing the educational competition introduced by charter schools in New York City. The overwhelming majority of charter patrons are black, who embrace the charter concept wholeheartedly. But Mayor Bill de Blasio has vowed to fight charter schools tooth and claw. The state and federal governments can be relied upon to sit on their hands, since teacher unions – diehard enemies of charter schools – are a leading constituency of the Democrat Party.

For over a century, blacks have lived and died by government and the Democrat Party. Now they are cut by the other edge of the government sword.

The print and broadcast news media have been cheerleaders for big government and the Democrat Party throughout the 20th century and beyond. First-Amendment absolutism has been a staple of left-wing thought. Recently, FCC regulators in the Obama administration hatched a plan to study journalists and their employers with a view towards tighter regulation. The pretext for the FCC’s Multi-Market Study of Critical Information Needs was that FCC broadcast licenses come with an obligation to serve the public – and how can government determine whether licensees are serving the public without thoroughly studying them? All hell has suddenly broken loose at the prospect that journalists themselves might be subjected to the same stifling regulation as other industries.

Of course, in a competitive market it is quite unnecessary for regulators to “study” the market to gauge whether it is working. Consumers make that judgment themselves. If businesses don’t serve consumers, consumers desert them and the businesses fold. Other businesses take their place and provide better service – or they join their predecessors on the scrap heap. But the presumption of government is that regulation must be necessary to promote competition – otherwise “market failure” will strand consumers up the creek without locomotion.

For decades, the knee-jerk reflex of journalists to any perceived problem has been that “no government regulation exists” to solve it. Now journalists tremble as they test the opposite edge of the government sword.

Now homosexuals are the latest group to successively experience both blades of the government sword. After years of life spent in the shadow of criminal prosecution, homosexuals have witnessed the gradual dismantling of state anti-sodomy laws. State-level bans on marriage by couples of the same gender have been invalidated by the U.S. Supreme Court. Not satisfied with their newly won freedom, homosexuals strive to wield power over their fellow citizens through coercion.

This is the only sense in which George Will was correct. His characterization of homosexuals as “bad winners” was infantile; it portrayed a serious issue of human freedom as a schoolboy exercise in bad manners. But he correctly sensed that homosexuals were winning something – even if he wasn’t quite sure what – and that this latest shift toward subjugating florists was a disastrous change in direction.

What Do Homosexuals Want? What Are They Owed Under the Rule of Law?

The holistic fallacy treats homosexuals as an organic unity with homogeneous wants and goals. In reality, they are individuals with diverse personalities and political orientations. But the homosexual movement follows a clearly discernible left-wing agenda, just as Hispanic activist organizations like La Raza hew to a left-wing line not representative of most Hispanics.

The homosexual political agenda strives to normalize and legitimize homosexual behavior by winning the imprimatur of government and the backing of government force. This movement feeds off the angst of people like Ingersoll and Freed – “It really hurt…it was eating at our souls” – who ache from the sting of rejection. The movement is selling government approval as a psychological substitute for parental and societal approval and economic rents as revenge for rejection. Homosexuals have observed the success of blacks, women and other protected classes in pursuing gains via this route.

There was a time, not so long ago when measured by the relative standard of history, when male homosexuals were not merely criminals but were subjected to a kind of informal “Jim Crow” persecution. They were routinely beaten and rolled not only by ordinary citizens but even by police. It is worth noting that these attitudes began to change decades ago, even before the advent of so-called “affirmative action” programs ostensibly designed to redress the grievances of other victim classes.

The Rule of Law demands that homosexuals receive the same rights and due-process protections as other people. It applies the same standards of consent to all sexual relationships between consenting adults. It grants the same freedom of contract – marital and otherwise – to all.

By the same token, the Rule of Law abhors privilege. It rejects the chimerical notion that the past harms suffered by individual members of groups can be compensated somehow by committing present harms that grant privilege and real income to different members of those same victimized groups.

The Rule of Law and Social Harmony

Sociologists and political scientists used to marvel at the comparative social harmony of American society – achieved despite the astonishing ethnic, racial, religious and political diversity of the citizenry. The consensus assigned credit to the American “melting pot.” The problem with this explanation is that a culture must first exist before new entrants can assimilate within it – and what mechanism achieved the original reconciliation of diverse elements?

Adherence to the Rule of Law within competitive markets made social harmony possible. It allowed the daily exchange of goods and services among individuals in relative anonymity, without disclosure of the multitudinous conflicts that might have otherwise produced stalemate and rejection. Milton Friedman observed astutely that free markets permit us to transact with the butcher, baker and candlestick maker without inquiring into their political or religious convictions. We need agree only on price and quantity. The need for broader consensus would bring ordinary life as we know it to a grinding halt; government would have to step in with coercive power in order to break the stalemate.

When everybody wears their politics, religion and sexual orientation on their sleeves, it makes life unpleasant, worrisome and exhausting. Shouldering chips weighs us down and invites conflict. This is the real source of the “polarization” complained of far and wide, not the relatively trivial differences between Republicans and Democrats. (The two parties are in firm agreement on the desirability of big government; they disagree vehemently only on who will run the show.)

Intellectuals wrongly assumed that the anonymity fostered by the Rule of Law reflected irreconcilable contradictions within society that would eventually cause violence like the Stonewall riots in 1969. The truth was that the Rule of Law reconciled contradictory views of individuals and allowed peaceful social change to occur gradually. Homosexuals were able to live, work and achieve outside of the glare of the public spotlight. It slowly dawned on the American public, at first subliminally and then consciously, that homosexuals were successfully contributing to every segment of American life. The achievements pointed to with pride today by homosexual activists were possible only because the Rule of Law facilitated this gradual, peaceful process. They were not caused by self-righteous activists and an all-powerful government bitch-slapping an ignorant, recalcitrant public into submission.

Subjugating Florists: A Pyrrhic Victory

Free competitive markets cash the checks written by the Rule of Law. Homosexuals have lived and prospered within those free-market boundaries, mirroring the tradition of Jews, blacks and other stigmatized minority groups. For centuries, homosexuals have faced ostracism and even death in various societies around the world. That remains true in certain countries even now. While it is true that homosexuals were formerly treated cruelly in America, it is also true that their cultural, economic and political gains here have been remarkably rapid by historical standards. Historical memory, rather than etiquette, should counsel against trashing the free-market institutions that have midwifed that progress.

Violating the Rule of Law in exchange for the power to compel service by businesses would be far worse than a display of bad manners. It would be the worst kind of tradeoff for homosexuals, gaining a temporary political and public-relations triumph at the expense of long-run economic stability.

Of course, homosexual activists are hardly the first or the only ones grasping at the levers of government power. The history of 20th-century America is dominated by such attempts, emanating at first from the political Left but now from the Right as well. It is grimly amusing to recall that early efforts along these lines were hailed by political scientists as encouraging examples of “pluralism” and “inclusiveness” – they were supposed to be signs that the downtrodden and marginalized were now participating in the political process. Today, everybody and his brother-in-law are trying to work local, state or federal government for an edge or a subsidy. Nobody can pretend now that this is anything but the unmistakable indicator of societal disintegration and decay.

Heretofore, the visible traits of democracy – representative government, elections, checks and balances – have been considered both necessary and sufficient to guarantee freedom. The falsity of that presumption is now dawning upon us with the appreciation of democratic absolutism as an impending reality. Subjugating florists may provide the homosexual movement with the thrills of political blood sport but any victories won will prove Pyrrhic.

DRI-259 for week of 2-2-14: Kristallnacht for the Rich: Not Far-Fetched

An Access Advertising EconBrief:

Kristallnacht for the Rich: Not Far-Fetched

Periodically, the intellectual class aptly termed “the commentariat” by The Wall Street Journal works itself into frenzy. The issue may be a world event, a policy proposal or something somebody wrote or said. The latest cause célèbre is a submission to the Journal’s letters column by a partner in one of the nation’s leading venture-capital firms. The letter ignited a firestorm; the editors subsequently declared that Tom Perkins of Kleiner Perkins Caufield & Byers “may have written the most-read letter to the editor in the history of The Wall Street Journal.”

What could have inspired the famously reserved editors to break into temporal superlatives? The letter’s rhetoric was both penetrating and provocative. It called up an episode in the 20th century’s most infamous political regime. And the response it triggered was rabid.

“Progressive Kristallnacht Coming?”

“…I would call attention to the parallels of fascist Nazi Germany to its war on its ‘one percent,’ namely its Jews, to the progressive war on the American one percent, namely ‘the rich.’” With this icebreaker, Tom Perkins made himself a rhetorical target for most of the nation’s commentators. Even those who agreed with his thesis felt that Perkins had no business using the Nazis in an analogy. The Wall Street Journal editors said “the comparison was unfortunate, albeit provocative.” They recommended reserving Nazi comparisons only for tyrants of the order of Stalin.

On the political Left, the reaction was less measured. The Anti-Defamation League accused Perkins of insensitivity. Bloomberg View characterized his letter as an “unhinged Nazi rant.”

No, this bore no traces of an irrational diatribe. Perkins had a thesis in mind when he drew an analogy between Nazism and Progressivism. “From the Occupy movement to the demonization of the rich, I perceive a rising tide of hatred of the successful one percent.” Perkins cited the abuse heaped on workers traveling Google buses from the cities to the California peninsula. Their high wages allowed them to bid up real-estate prices, thereby earning the resentment of the Left. Perkins’ ex-wife Danielle Steel placed herself in the crosshairs of the class warriors by amassing a fortune writing popular novels. Millions of dollars in charitable contributions did not spare her from criticism for belonging to the one percent.

“This is a very dangerous drift in our American thinking,” Perkins concluded. “Kristallnacht was unthinkable in 1930; is its descendant ‘progressive’ radicalism unthinkable now?” Perkins’ point is unmistakable; his letter is a cautionary warning, not a comparison of two actual societies. History doesn’t repeat itself, but it does rhyme. Kristallnacht and Nazi Germany belong to history. If we don’t mend our ways, something similar and unpleasant may lie in our future.

A Short Refresher Course in Early Nazi Persecution of the Jews

Since the current debate revolves around the analogy between Nazism and Progressivism, we should refresh our memories about Kristallnacht. The name itself translates loosely into “Night of Broken Glass.” It refers to the shards of broken window glass littering the streets of cities in Germany and Austria on the night and morning of November 9-10, 1938. The windows belonged to houses, hospitals, schools and businesses owned and operated by Jews. These buildings were first looted, then smashed by elements of the German paramilitary SA (the Brownshirts) and SS (security police), led by the Gauleiters (regional leaders).

In 1933, Adolf Hitler was elevated to the German chancellorship after the Nazi Party won a plurality of votes in the national election. Almost immediately, laws placing Jews at a disadvantage were passed and enforced throughout Germany. The laws were the official expression of the philosophy of German anti-Semitism that dated back to the 1870s, the time when German socialism began evolving from the authoritarian roots of Otto von Bismarck’s rule. Nazi officialdom awaited a pretext on which to crack down on Germany’s sizable Jewish population.

The pretext was provided by the shooting of German diplomat Ernst vom Rath in Paris on Nov. 7, 1938 by Herschel Grynszpan, a 17-year-old Polish Jew. The boy was apparently upset by German policies expelling his parents from the country. Ironically, vom Rath’s sentiments were anti-Nazi and opposed to the persecution of Jews. Vom Rath’s death on Nov. 9 was the signal for the release of Nazi paramilitary forces on a reign of terror and abduction against German and Austrian Jews. Police were instructed to stand by and not interfere with the SA and SS as long as only Jews were targeted.

According to official reports, 91 deaths were attributed directly to Kristallnacht. Some 30,000 Jews were spirited off to jails and concentration camps, where they were treated brutally before finally winning release some three months later. In the interim, though, some 2,000-2,500 Jews died in the camps. Over 7,000 Jewish-owned or operated businesses were damaged. Over 1,000 synagogues in Germany and Austria were burned.

The purpose of Kristallnacht was not only wanton destruction. The assets and property of Jews were seized to enhance the wealth of the paramilitary groups.

Today we regard Kristallnacht as the opening round of Hitler’s Final Solution – the policy that produced the Holocaust. This strategic primacy is doubtless why Tom Perkins invoked it. Yet this furious controversy will just fade away, merely another media preoccupation du jour, unless we retain its enduring significance. Obviously, Tom Perkins was not saying that the Progressive Left’s treatment of the rich is now comparable to Nazi Germany’s treatment of the Jews. The Left is not interning the rich in concentration camps. It is not seizing the assets of the rich outright – at least not on a wholesale basis, anyway. It is not reducing the homes and businesses of the rich to rubble – not here in the U.S., anyway. It is not passing laws to discriminate systematically against the rich – at least, not against the rich as a class.

Tom Perkins was issuing a cautionary warning against the demonization of wealth and success. This is a political strategy closely associated with the philosophy of anti-Semitism; that is why his invocation of Kristallnacht is apropos.

The Rise of Modern Anti-Semitism

Despite the politically correct horror expressed by the Anti-Defamation League toward Tom Perkins’ letter, reaction to it among Jews has not been uniformly hostile. Ruth Wisse, professor of Yiddish and comparative literature at Harvard University, wrote an op-ed for The Wall Street Journal (02/04/2014) defending Perkins.

Wisse traced the modern philosophy of anti-Semitism to the polemicist Wilhelm Marr, whose heyday was the 1870s. Marr “charged Jews with using their skills ‘to conquer Germany from within.’” Marr was careful to distinguish his philosophy of anti-Semitism from prior philosophies of anti-Judaism. Jews “were taking unfair advantage of the emerging democratic order in Europe with its promise of individual rights and open competition in order to dominate the fields of finance, culture and social ideas.”

Wisse declared that “anti-Semitism channel[ed] grievance and blame against highly visible beneficiaries of freedom and opportunity.” “Are you unemployed? The Jews have your jobs. Is your family mired in poverty? The Rothschilds have your money. Do you feel more secure in the city than you did on the land? The Jews are trapping you in the factories and charging you exorbitant rents.”

The Jews were undermining Christianity. They were subtly perverting the legal system. They were overrunning the arts and monopolizing the press. They spread Communism, yet practiced rapacious capitalism!

This modern German philosophy of anti-Semitism long predated Nazism. It accompanied the growth of the German welfare state and German socialism. The authoritarian political roots of Nazism took hold under Otto von Bismarck’s conservative socialism, and so did Nazism’s anti-Semitic cultural roots as well. The anti-Semitic conspiracy theories ascribing Germany’s every ill to the Jews were not the invention of Hitler, but of Wilhelm Marr over half a century before Hitler took power.

The Link Between the Nazis and the Progressives: the War on Success

As Wisse notes, the key difference between modern anti-Semitism and its ancestor – what Wilhelm Marr called “anti-Judaism” – is that the latter abhorred the religion of the Jews while the former resented the disproportionate success enjoyed by Jews much more than their religious observances. The modern anti-Semitic conspiracy theorist pointed darkly to the predominance of Jews in high finance, in the press, in the arts and running movie studios and asked rhetorically: How do we account for the coincidence of our poverty and their wealth, if not through the medium of conspiracy and malefaction? The case against the Jews is portrayed as prima facie and morphs into per se through repetition.

Today, the Progressive Left operates in exactly the same way. “Corporation” is a pejorative. “Wall Street” is the antonym of “Main Street.” The very presence of wealth and high income is itself damning; “inequality” is the reigning evil and is tacitly assigned a pecuniary connotation. Of course, this tactic runs counter to the longtime left-wing insistence that capitalism is inherently evil because it forces us to adopt a materialistic perspective. Indeed, environmentalism embraces anti-materialism to this day while continuing to bunk in with its progressive bedfellows.

We must interrupt with an ironic correction. Economists – according to conventional thinking the high priests of materialism – know that it is human happiness and not pecuniary gain that is the ultimate desideratum. Yet the constant carping about “inequality” looks no further than money income in its supposed solicitude for our well-being. Thus, the “income-inequality” progressives – seemingly obsessed with economics and materialism – are really anti-economic. Economists, supposedly green-eyeshade devotees of numbers and models, are the ones focusing on human happiness rather than ideological goals.

German socialism metamorphosed into fascism. American Progressivism is morphing from liberalism to socialism and – ever more clearly – homing in on its own version of fascism. Both employed the technique of demonization and conspiracy to transform the mutual benefit of free voluntary exchange into the zero-sum result of plunder and theft. How else could productive effort be made to seem fruitless? How else could success be made over into failure? This is the cautionary warning Perkins was sounding.

The Great Exemplar

The great Cassandra of political economy was F.A. Hayek. Early in 1929, he predicted that Federal Reserve policies earlier in the decade would soon bear poisoned fruit in the form of a reduction in economic activity. (His mentor, Ludwig von Mises, was even more emphatic, foreseeing “a great crash” and refusing a prestigious financial post for fear of association with the coming disaster.) He predicted that the Soviet economy would fail owing to lack of a functional price system; in particular, missing capital markets and interest rates. He predicted that Keynesian policies begun in the 1950s would culminate in accelerating inflation. All these came true, some of them within months and some after a lapse of years.

Hayek’s greatest prediction was really a cautionary warning, in the same vein as Tom Perkins’ letter but much more detailed. The 1944 book The Road to Serfdom made the case that centralized economic planning could operate only at the cost of the free institutions that distinguished democratic capitalism. Socialism was really another form of totalitarianism.

The reaction to Hayek’s book was much the same as the reaction to Perkins’ letter. Many commentators who should have known better accused both men of fascism. They also accused both of describing a current state of affairs when each was really trying to avoid a dystopia.

The flak Hayek took was especially ironic because his book actually served to prevent the outcome he feared. But instead of winning the acclaim of millions, this earned him the scorn of intellectuals. The intelligentsia insisted that Hayek predicted the inevitable succession of totalitarianism after the imposition of a welfare state. When welfare states in Great Britain, Scandinavia, and South America failed to produce barbed wire, concentration camps and German Shepherd dogs, the Left advertised this as proof of Hayek’s “exaggerations” and “paranoia.”

In actual fact, Great Britain underwent many of the changes Hayek had feared and warned against. The notorious “Control of Engagement Order,” for instance, was an attempt by a Labour government to centrally control the English labor market – to specify an individual’s work and wage rather than allowing free choice in an impersonal market to do the job. The attempt failed just as dismally as Hayek and other free-market economists had foreseen it would. In the 1980s, it was Hayek’s arguments, wielded by Prime Minister Margaret Thatcher, that paved the way for the rolling back of British socialism and the taming of inflation. It’s bizarre to charge the prophet of doom with inaccuracy when his prophecy is the savior, but that’s what the Left did to Hayek.

Now they are working the same familiar con on Tom Perkins. They begin by misconstruing the nature of his argument. Later, if his warnings are successful, they will use that against him by claiming that his “predictions” were false.

Enriching Perkins’ Argument

This is not to say that Perkins’ argument is perfect. He has instinctively fingered the source of the threat to our liberties. Perkins himself may be rich, but his argument isn’t; it is threadbare and skeletal. It could use some enriching.

The war on the wealthy has been raging for decades. The opening battle is lost to history, but we can recall some early skirmishes and some epic brawls prior to Perkins.

In Europe, the war on wealth used anti-Semitism as its spearhead. In the U.S., however, the popularity of Progressives in academia and government made antitrust policy a more convenient wedge for their populist initiatives against success. Antitrust policy was a crown jewel of the Progressive movement in the early 1900s; Presidents Theodore Roosevelt and William Howard Taft cultivated reputations as “trust busters.”

The history of antitrust policy exhibits two pronounced tendencies: the use of the laws to restrict competition for the benefit of incumbent competitors and the use of the laws by the government to punish successful companies for various political reasons. The sobering research of Dominick Armentano shows that antitrust policy has consistently harmed consumer welfare and economic efficiency. The early antitrust prosecution of Standard Oil, for example, broke up a company that had consistently increased its output and lowered prices to consumers over long time spans. The Orwellian rhetoric accompanying the judgment against ALCOA in the 1940s reinforces the notion that punishment, not efficiency or consumer welfare, was behind the judgment. The famous prosecutions of IBM and AT&T in the 1970s and 80s each spawned book-length investigations showing the perversity of the government’s claims. More recently, Microsoft became the latest successful firm to reap the government’s wrath for having the temerity to revolutionize industry and reward consumers throughout the world.

The rise of the regulatory state in the 1970s gave agencies and federal prosecutors nearly unlimited, unsupervised power to work their will on the public. Progressive ideology combined with self-interest to create a powerful engine for the demonization of success. Prosecutors could not only pursue their personal agendas but also climb the career ladder by making high-profile cases against celebrities. The prosecution of Michael Milken of Drexel Burnham Lambert is a classic case of persecution in the guise of prosecution. Milken virtually created the junk-bond market, thereby originating an asset class that has enhanced the wealth of investors by untold billions or trillions of dollars. For his pains, Milken was sent to jail.

Martha Stewart is a high-profile celebrity who was, in effect, convicted of the crime of being famous. She was charged with and convicted of lying to investigators about a case in which the only crime could have been the offense of insider trading. But she was the trader and she was not charged with insider trading. The utter triviality of the matter and the absence of any damage to consumers or society at large make it clear that she was targeted because of her celebrity – that is, her success.

Today, the impetus for pursuing successful individuals and companies comes primarily from the federal level. Harvey Silverglate (author of Three Felonies a Day) has shown that virtually nobody is safe from the depredations of prosecutors out to advance their careers by racking up convictions at the expense of justice.

Government is the institution charged with making and enforcing law, yet government has now become the chief threat to law. At the state and local level, governments hand out special favors and tax benefits to favored recipients – typically those unable to attain success on their own efforts – while making up the revenue from the earned income of taxpayers at large. At the federal level, Congress fails in its fundamental duty and ignores the law by refusing to pass budgets. The President appoints czars to make regulatory law, while choosing at discretion to obey the provisions of some laws and disregard others. In this, he fails his fundamental executive duty to execute the laws faithfully. Judges treat the Constitution as a backdrop for the expression of their own views rather than as a subject for textual fidelity. All parties interpret the Constitution to suit their own convenience. The overarching irony here is that the least successful institution in America has united in a common purpose against the successful achievers in society.

The most recent Presidential campaign was conducted largely as a jihad against the rich and successful in business. Mitt Romney was forced to defend himself against the charge of succeeding too well in his chosen profession, as well as the corollary accusation that his success came at the expense of the companies and workers in which his private-equity firm invested. Either his success was undeserved or it was really failure. There was no escape from the double bind against which he struggled.

It is clear, then, that the “progressivism” decried by Tom Perkins dates back over a century and that it has waged a war on wealth and success from the outset. The tide of battle has flowed – during the rampage of the Bull Moose, the Depression and New Deal and the recent Great Recession and financial crisis – and ebbed – under Eisenhower and Reagan. Now the forces of freedom have their backs to the sea.

It is this much richer context that forms the backdrop for Tom Perkins’ warning. Viewed in this panoramic light, Perkins’ letter looks less like the crazed rant of an isolated one-percenter and more like the battle cry of a counter-revolution.