DRI-131 for week of 8-16-15: Is the Purpose of Government to Eliminate All Sources of Discontent?

An Access Advertising EconBrief:

Is the Purpose of Government to Eliminate All Sources of Discontent?

If we took every action taken by government at face value, we would be forced to conclude that its central purpose is to eliminate all sources of discontent. And that is exactly the goal set for it by a long-forgotten Labor Party parliamentarian in early 20th-century Great Britain. Is that really what motivates politicians and bureaucrats? Should it be?

Actions taken by state-government regulators in New York raise these questions. Earlier this month, state Attorney General Eric Schneiderman announced that retailer Abercrombie & Fitch was the most prominent of 13 companies to end a work practice known as “on-call scheduling.” The Attorney General (hereinafter, AG) cited pressure by his office as the motivating force behind the change. The practice requires workers to be “on-call” in the sense that they must be prepared to show up for work, or stay home, on notice as short as one hour. As noted in The Wall Street Journal (“Abercrombie Agrees to End On-Call Scheduling,” 8/7/2015, by Lauren Weber), “workers whose shifts are canceled don’t receive pay, even if they had blocked out that time and made child-care or other arrangements.”

Abercrombie’s general counsel, Robert Bostrom, described the company’s capitulation by stating that workers will henceforth receive their schedules one week in advance and can choose to receive word about additional shifts that become available on short notice. The new policy, intended to “create as stable and predictable a work environment as possible” for Abercrombie’s employees, will become effective in September in New York and eventually be phased in nationwide.

Why did the Attorney General of New York State choose to intervene in the work-scheduling policies of a baker’s dozen of retailers? “Unpredictable work schedules take a toll on all employees, especially those in low wage sectors,” commented Schneiderman, adding that other companies should follow Abercrombie’s “important step.” In April, the AG had claimed that Abercrombie’s policy “potentially” broke a New York law. That law states that staffers who report for scheduled work must receive at least four hours’ pay at minimum wage even if sent home. (Several other states have similar laws.) As the Journal points out, the law was passed before the advent of text messaging and e-mail made it easy to reach most people on short notice. Despite its change of policy, Abercrombie admitted no violation of law.

To an economist, the regulatory action taken by the New York Attorney General’s office and the explanations accompanying it seem utterly inexplicable – unless we are willing to believe that the inherent purpose of government is to eliminate all sources of human discontent.

Why Oppose – That Is, Regulate – On-Call Scheduling?

AG Schneiderman has chosen to regulate on-call scheduling by issuing an unfavorable opinion of this particular work practice, then pressuring firms behind the scenes to drop it. The question is: Why?

According to the Journal, “a number of current and former Abercrombie store associates nationwide left complaints about the scheduling policy on the employer-review site Glassdoor….” (Parenthetically, we should note that the ghastly use of “a number of” could denote anything from one to infinity and is the kind of elementary error that freshman journalism students are taught to avoid.) Let us stipulate that some workers find the practice of on-call scheduling objectionable. So what? Is the purpose of government to act as a sort of all-purpose complaint department? Or is there something unique, perhaps, about the situation of retail employees – or human labor in general – that requires complaints to be addressed by government rather than directed to management?

As a matter of fact, why don’t workers who find the practice of on-call scheduling objectionable adopt the great American solution open to all workers in a free society; namely, quit and find a different job with working conditions more to their liking?

The Great Fried-Chicken Dilemma

To clarify this problem, consider a much easier one posed in a much more familiar context.

Consider the problem of consumers confronted with a product they don’t like. Suppose a diner visits a fried chicken restaurant and finds the main course unpalatable. Should the diner complain to management? Well, many restaurants encourage this; it may or may not produce a refund for the diner. Conceivably, it might even result in alteration of the restaurant’s recipe or staff. But chances are that the diner will simply shrug and go somewhere else. After all, there are untold numbers of competing fried-chicken restaurants.

Should we demand that the Federal Trade Commission monitor consumer websites for customer complaints and “crack down” on restaurants that sell “inferior” fried chicken? No, there are huge flaws with this approach to the problem of maintaining restaurant quality. One drawback is that consumer tastes in fried chicken differ; one man’s inferior chicken is another man’s delight. Requiring government to enforce a quality standard in fried chicken will inevitably result in the production of “government quality” fried chicken; that is, one kind of fried chicken that diners of all tastes will have to eat or else. In this case, “or else” means they will have to prepare their own fried chicken. Since they previously had that alternative but rejected it in favor of dining out, this clearly makes them worse off than they would be if they could find a fried chicken to their taste. Of course, we could pretend to solve this problem by having government set up a different quality standard for each different flavor of fried chicken – one for extra crispy, one for spicy, and so on. But this would only create a host of new problems. And it assumes that government is just as responsive to consumer desires as producers are in free markets, whereas our experience tells us that government is quite insensitive to the desires of constituents and tends to impose a “one-size-fits-all” standard on the public whenever it can.

Another obvious drawback is the vast number of fried-chicken restaurants and diners, which would force government to employ huge numbers of people and spend ungodly amounts of time checking out complaints. (Lack of resources also argues against having government set up multiple quality standards for fried chicken, since it would hardly have sufficient time and manpower to enforce one standard, let alone multiple ones.)

Still another drawback would be the inducement sellers would have to file complaints against their competitors. Not only would this tie up government resources in investigating bogus complaints, it would also imperil the workings of competitive markets. If sellers could use government as a tool in falsely branding their competitors’ products as inferior, this would vitiate the very purpose that regulation is intended to serve.

Just think about all the problems we don’t have because we don’t force government to regulate fried-chicken quality in free markets. We don’t have to worry about how many different flavors of fried chicken to allow – are regular, extra-crispy, spicy, and Cajun-style enough, too many or insufficient to satisfy us? Should the number vary in different cities? Counties? States? Regions? Should it change over time, and if so how often? We don’t worry about any of these things. In fact, we take the answers to these questions completely for granted without ever realizing that they might be a problem in the first place. The market takes care of the answers without any of us ever giving the matter a moment’s thought.

Upon consideration, we realize that the mere fact that somebody doesn’t like fried chicken at a restaurant doesn’t necessarily mean that a market failure calling for government regulation has occurred. It might simply mean that the consumer has tasted fried chicken prepared in one of the various ways that don’t suit him; he needs to visit a restaurant better suited to his tastes. Of course, this could be styled a failure of information, but it is certainly not clear that government regulation could have prevented it or could solve it for other consumers. Markets, not governments, are collators and transmitters of information.

If we tried hard enough, we might envision a role for government in such a situation. Maybe the consumer didn’t like the chicken because it was tainted by salmonella. But we have government regulation of health standards in restaurants and preparation standards in chicken plants – and salmonella cases still happen. In reality, markets solve the problem of food poisoning in restaurants by turning restaurants that serve tainted food into commercial pariahs – a disincentive that exceeds any penalty government offers.

The Great Fried-Chicken Dilemma offers vast insight into the problems of labor markets in general and the regulation of on-call scheduling in particular.

The Potential Efficiency Benefits of On-Call Scheduling

Neither the Wall Street Journal article nor Attorney General Schneiderman – nor, for that matter, Abercrombie itself – said anything to suggest that the practice of on-call scheduling might actually be beneficial for retail sellers, for consumers and for workers themselves. That omission is startling. There was a reason why Abercrombie and 12 other retail businesses employed this practice.

Every consumer has patronized a retail seller and knows that these businesses are sometimes bustling with business and sometimes nearly empty. At some point, every consumer has experienced the frustration of seeking a sales clerk in vain. Businesses strive to keep exactly the right number of staff on the floor – not too many, not too few. Depending on the particular good(s) sold, human labor may be the most expensive cost incurred by the business, so it behooves managers to manipulate their “inventory” of sales staff to best advantage.

Just as businesses want to manage their inventory of sales staff optimally, so they also want to keep just the right inventory of goods on their shelves. For centuries, this was one of the biggest headaches facing the average business. Economists even identified a phenomenon called the “inventory recession,” caused by too many businesses simultaneously overestimating the need for future inventories and producing far more goods than were needed – only to find shelves and warehouses full to overflowing when consumer demand did not keep pace with expectations. Recent technological innovations in transportation, logistics and computing have allowed businesses to adopt a practice called “just-in-time” inventory management. It lets them postpone restocking until the last minute, gauge demand much more accurately and avoid the need for long-range forecasting of inventory requirements.

If we view a retail business’s roster of employees as its staffing “inventory,” it is clear that on-call scheduling is a kind of “just-in-time” program for staffing. It allows retail managers to postpone determination of their final staffing schedule until the point when they can gauge the demand for retail staffing much more accurately. This allows them to avoid paying superfluous clerks when the store is virtually empty while having extra clerks on hand when demand is unexpectedly strong. It is crystal clear that on-call scheduling is potentially very beneficial for a retail business.
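To make the logic concrete, here is a stylized back-of-the-envelope sketch. All of the figures (the wage, the shift length, the staffing levels and the probabilities) are hypothetical assumptions chosen only to illustrate the arithmetic; none comes from the article. A store facing an even chance of a slow or a busy afternoon must staff for the busy case under a fixed schedule, but under on-call scheduling it pays only for the clerks it actually needs:

# Stylized sketch: staffing as "just-in-time" inventory. All numbers are hypothetical.
WAGE = 9.0           # dollars per clerk-hour (assumed)
SHIFT_HOURS = 4      # length of the afternoon shift (assumed)
SCENARIOS = [        # (probability, clerks actually needed)
    (0.5, 2),        # slow afternoon
    (0.5, 6),        # busy afternoon
]

def expected_wage_bill(clerks_paid_per_scenario):
    """Expected wage bill given how many clerks are paid in each scenario."""
    return sum(prob * clerks * WAGE * SHIFT_HOURS
               for (prob, _), clerks in zip(SCENARIOS, clerks_paid_per_scenario))

fixed_cost = expected_wage_bill([6, 6])    # fixed schedule: staff for the busy case
on_call_cost = expected_wage_bill([2, 6])  # on-call: confirm only the clerks needed

print(f"Fixed schedule:   ${fixed_cost:.2f}")
print(f"On-call schedule: ${on_call_cost:.2f}")
print(f"Expected saving:  ${fixed_cost - on_call_cost:.2f}")

On these made-up numbers the expected saving is $72 per afternoon shift. Competition then determines how that saving is divided among lower prices for consumers, higher wages for workers willing to be on call, and profit for owners.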

Moreover, it should be equally clear that on-call scheduling benefits consumers, too. This is a case where the interests of consumers and those of the business are directly aligned. Consumers want to have extra clerks on hand at busy times but don’t benefit much, if at all, from the presence of superfluous clerks in slack times. In the long run, competition between retail businesses will ensure that the benefits of lower costs are passed along to consumers in the form of lower prices, so the efficiency gains from on-call scheduling really go to consumers, even though we associate the concept of business efficiency with productive advantage and gains to business owners.

What obviously failed to occur to AG Schneiderman, the Wall Street Journal and (from outward appearances) even Abercrombie and its spokesman is that on-call scheduling is also a potential source of benefits to retail employees. Just as consumers in our fried-chicken example derive benefits from product differentiation, so also may workers derive benefits from different terms of employment and work environments. Retail sales work is generally viewed as a form of low-skilled labor. Economists treat low-skilled labor as homogeneous; that is, as indistinguishable. But on-call scheduling gives workers the chance either to accept employment or, alternatively, to earn a higher wage by competing on the basis of their willingness to work – or forgo work – on short notice. Since the decision to work for a particular employer is voluntary, nobody is forced to take this offer – just as no consumer is forced to eat fried chicken they don’t like. There are countless retail sellers, so workers who don’t relish the practice of on-call scheduling can work for a business that doesn’t follow the practice – just as consumers who don’t like one variety of fried chicken can patronize one of the many other competing brands extant.

So Why Regulate On-Call Scheduling?

On-call scheduling offers potential benefits to retail businesses, consumers and even to retail workers – just as different types of products offer potential benefits to consumers. Nobody is forced to endure on-call scheduling if they don’t like it, since the large number of retail businesses competing for workers gives workers a wide choice of employment – just as consumers have wide choices of different products and aren’t forced to put up with a particular brand. If it would be incredibly wasteful and a huge mistake to regulate brand variety and quality of consumer goods – and it would – wouldn’t it be just as big a mistake to regulate the practice of on-call scheduling for analogous reasons?

The answer is yes. There is no earthly reason for government at any level – municipal, state or federal – to regulate the practice of on-call scheduling. Only bad can come of it. The implication of AG Schneiderman’s actions is that government has a duty to prevent human beings within its jurisdiction from experiencing even momentary discontent. The AG must consider workers to be either too stupid to act in their own interest or too helpless to do so even if they had the wit to perceive it. Left unspecified, however, is how or where the AG acquired the superior wisdom and knowledge to substitute his judgment for that of the workers whose interests he claims to represent.

Free Markets vs. Regulation

We have shown that various ways of producing goods and services (such as utilizing on-call scheduling to staff retailing establishments) and various types of goods (such as different varieties and flavors of fried chicken) offer potential benefits to consumers. How is that potential realized; that is, how do we cross the bridge from “potential” to “actual”?

Apparently, there are two ways. We can give the processes and products a trial in the free market and see how they work, keeping the ones that succeed and discarding the ones that fail. The failures will lose money for sellers because consumers will reject them, either because they do not like them or because they are too expensive. That makes it easy for producers to discard them. Alternatively, regulators can accept or reject them on an a priori basis. In order for this method to succeed, regulators must know as much about technology and costs as the producers of the affected goods and services do. Regulators also must know as much about consumers’ tastes and preferences as the consumers themselves do – as well as knowing what is “good” for consumers to consume in a physiological and moral sense. In other words, regulators must be well-nigh omniscient. (Where input markets are directly affected, as in this case, we can treat workers as the “consumers” of the relevant process.)

Put in this way, the choice is as clear as two-way glass. Free markets work vastly better and are less expensive than regulation. Given this, why do governments leap to regulate at every opportunity?

Why Governments Almost Always Choose Regulation

The New York State Attorney General chose to regulate on-call scheduling for a reason. Based on our analysis, we might suppose him to be perverse – deliberately choosing a result that makes everybody worse off than before. But that is not so. Economics tells us that somebody has to be better off, and the first place to look for the beneficiary or beneficiaries would be the AG himself and his sponsors and constituents.

The AG is a bureaucrat, a denizen of state government. He benefits when his domain grows larger and his power over it increases. When the number of firms he regulates increases, the AG’s power grows and so does his budget – or, more precisely, his basis for demanding budget and staffing increases strengthens. When the AG’s office regulates the processes employed by retail firms, preventing them from using innovative means to compete with other firms, state government is cartelizing what would otherwise be a competitive market. The result of this will be less output and higher prices in the retail-sales sector. This creates a constituency of business owners and managers who are beholden to the AG and state-government politicians. (In the broad sense, this is what happened for over four decades when the old Civil Aeronautics Board cartelized interstate airline travel in the United States between the 1930s and 1978.)

Notice that the list of beneficiaries from regulation of on-call scheduling is small compared to the roster of potential beneficiaries from unregulated on-call scheduling. Regulation benefits government bureaucrats, workers and politicians directly involved with the affected industry, along with business owners who gain from market cartelization. It harms everybody else, most notably the consumers of the good involved and (in this case) almost certainly the workers affected as well. The gains of business owners are probably temporary, but the gains accruing to government will last as long as government regulation continues.

The best way to visualize the actions of government vis-à-vis markets is by thinking of government as entrepreneurship in reverse. Politicians and bureaucrats are always alert for opportunities to expand their domain. But whereas the invisible hand of competition and voluntary exchange ensures that free-market entrepreneurship creates broad, mutual benefits, the coercive, visible hand of government subtracts net value from almost all of its interchanges with markets.

Is the Purpose of Government to Eliminate All Sources of Discontent?

Now we understand the heretofore inexplicable contention that the purpose of government is to eliminate all sources of discontent. How could anybody be so naïve as to think that government has the ability to remedy all unhappiness? Doesn’t the speaker realize that his statement is a recipe for fiscal insolvency? Writing a blank check to government is a fruitless quest for a non-existent nirvana.

Alas, the author of those words didn’t particularly care whether government actually succeeded in eliminating any discontent or not. He was not striving for universal bliss. Rather he sought an unlimited warrant for government intrusion in order to benefit his own special interest. The more power government has, the larger it grows. The larger it grows, the more its servants prosper. And the more the servants of government prosper, the more the rest of us suffer.

DRI-172 for week of 7-5-15: How and Why Did ObamaCare Become SCOTUSCare?

An Access Advertising EconBrief:

How and Why Did ObamaCare Become SCOTUSCare?

On June 25, 2015, the Supreme Court of the United States delivered its most consequential opinion in recent years in King v. Burwell. King was David King, one of various plaintiffs opposing Sylvia Burwell, Secretary of Health and Human Services. The case might more colloquially be called “ObamaCare II,” since it dealt with the second major attempt to overturn the Obama administration’s signature legislative achievement.

The Obama administration has been bragging about its success in attracting signups for the program. Not surprisingly, it fails to mention two facts that make this apparent victory Pyrrhic. First, most of the signups are people who lost their previous health insurance due to the law’s provisions, not people who lacked insurance to begin with. Second, a large chunk of enrollees are being subsidized by the federal government in the form of a tax credit for the amount of the insurance.

The point at issue in King v. Burwell is the legality of this subsidy. The original legislation provides for health-care exchanges established by state governments, and proponents have been quick to cite these provisions to pooh-pooh the contention that the Patient Protection and Affordable Care Act (PPACA) ushered in a federally run, socialist system of health care. The specific language used by PPACA in Section 1401 is that the IRS can provide tax credits for insurance purchased on exchanges “established by the State.” That phrase appears 14 times in Section 1401 and each time it clearly refers to state governments, not the federal government. But in actual practice, states have found it excruciatingly difficult to establish these exchanges and many states have refused to do so. Thus, people in those states have turned to the federal-government website for health insurance and have nevertheless received a tax credit under the IRS’s interpretation of Section 1401. That interpretation has come to light in various lawsuits heard by lower courts, some of which have ruled for plaintiffs and against attempts by the IRS and the Obama administration to award the tax credits.

Without the tax credits, many people on both sides of the political spectrum agree, PPACA will crash and burn. Not enough healthy people will sign up for the insurance to subsidize those with pre-existing medical conditions for whom PPACA is the only source of external funding for medical treatment.

To a figurative roll of drums, the Supreme Court of the United States (SCOTUS) released its opinion on June 25, 2015. It upheld the legality of the IRS interpretation in a 6-3 decision, finding for the government and the Obama administration for the second time. And for the second time, the opinion for the majority was written by Chief Justice John Roberts.

Roberts’ Rules of Constitutional Disorder

Given that Justice Roberts had previously written the opinion upholding the constitutionality of the law, his vote here cannot be considered a complete shock. As before, the shock was in the reasoning he used to reach his conclusion. In the first case (National Federation of Independent Businesses v. Sebelius, 2012), Roberts interpreted a key provision of the law in a way that its supporters had categorically and angrily rejected during the legislative debate prior to enactment and subsequently. He referred to the “individual mandate” that uninsured citizens must purchase health insurance as a tax. This rescued it from the otherwise untenable status of a coercive consumer directive – something not allowed under the Constitution.

Now Justice Roberts addressed the meaning of the phrase “established by the State.” He did not agree with one interpretation previously made by the government’s Solicitor General, that the term was an undefined term of art. He disdained to apply a precedent established by the Court in a previous case involving interpretation of law by administrative agencies, the Chevron case. The precedent said that in cases where a phrase was ambiguous, a reasonable interpretation by the agency charged with administering the law would rule. In this case, though, Roberts claimed that since “the IRS…has no expertise in crafting health-insurance policy of this sort,” Congress could not possibly have intended to grant the agency this kind of discretion.

Roberts was prepared to concede that “established by the State” does not mean “established by the federal government.” But, he said, the Supreme Court cannot interpret the law this way because doing so would cause the law to fail to achieve its intended purpose. So the Court must treat the wording as ambiguous and interpret it in such a way as to advance the goals intended by Congress and the administration. Hence his decision for the defendant and against the plaintiffs.

In other words, he rejected the ability of the IRS to interpret the meaning of the phrase “established by the State” because of that agency’s lack of health-care-policy expertise, but is sufficiently confident of his own expertise in that area to interpret its meaning himself; it is his assessment of the market consequences that drives his decision to uphold the tax credits.

Roberts’ opinion prompted one of the most scathing, incredulous dissents in the history of the Court, by Justice Antonin Scalia. “This case requires us to decide whether someone who buys insurance on an exchange established by the Secretary gets tax credits,” begins Scalia. “You would think the answer would be obvious – so obvious that there would hardly be a need for the Supreme Court to hear a case about it… Under all the usual rules of interpretation… the government should lose this case. But normal rules of interpretation seem always to yield to the overriding principle of the present Court – the Affordable Care Act must be saved.”

The reader can sense Scalia’s mounting indignation and disbelief. “The Court interprets [Section 1401] to award tax credits on both federal and state exchanges. It accepts that the most natural sense of the phrase ‘an exchange established by the State’ is an exchange established by a state. (Understatement, thy name is an opinion on the Affordable Care Act!) Yet the opinion continues, with no semblance of shame, that ‘it is also possible that the phrase refers to all exchanges.’ (Impossible possibility, thy name is an opinion on the Affordable Care Act!)”

“Perhaps sensing the dismal failure of its efforts to show that ‘established by the State’ means ‘established by the State and the federal government,’ the Court tries to palm off the pertinent statutory phrase as ‘inartful drafting.’ The Court, however, has no free-floating power to rescue Congress from their drafting errors.” In other words, Justice Roberts has rewritten the law to suit himself.

Scalia drives the point home: “…the Court forgets that ours is a government of laws and not of men. That means we are governed by the terms of our laws and not by the unenacted will of our lawmakers. If Congress enacted into law something different from what it intended, then it should amend the law to conform to its intent. In the meantime, the Court has no roving license …to disregard clear language on the view that … ‘Congress must have intended’ something broader.”

“Rather than rewriting the law under the pretense of interpreting it, the Court should have left it to Congress to decide what to do… [the] Court’s two cases on the law will be remembered through the years. And the cases will publish the discouraging truth that the Supreme Court favors some laws over others and is prepared to do whatever it takes to uphold and assist its favorites… We should start calling this law SCOTUSCare.”

Jonathan Adler of the much-respected and quoted law blog Volokh Conspiracy put it this way: “The umpire has decided that it’s okay to pinch-hit to ensure that the right team wins.”

And indeed, what most stands out about Roberts’ opinion is its contravention of ordinary constitutional thought. It is not the product of a mind that began at square one and worked its way methodically to a logical conclusion. The reader senses a reversal of procedure; the Chief Justice started out with a desired conclusion and worked backwards to figure out how to justify reaching it. Justice Scalia says as much in his dissent. But Scalia does not tell us why Roberts is behaving in this manner.

If we are honest with ourselves, we must admit that we do not know why Roberts is saying what he is saying. Beyond question, it is arbitrary and indefensible. Certainly it is inconsistent with his past decisions. There are various reasons why a man might do this.

One obvious motivation might be that Roberts is being blackmailed by political supporters of the PPACA, within or outside of the Obama administration. Since blackmail is not only a crime but also a distasteful allegation to make, nobody will advance it without concrete supporting evidence – not only evidence against the blackmailer but also an indication of his or her ammunition. The opposite side of the blackmail coin is bribery. Once again, nobody will allege this publicly without concrete evidence, such as letters, tapes, e-mails, bank account or bank-transfer information. These possibilities deserve mention because they lie at the head of a short list of motives for betrayal of deeply held principles.

Since nobody has come forward with evidence of malfeasance – or is likely to – suppose we disregard that category of possibility. What else could explain Roberts’ actions? (Note the plural; this is the second time he has sustained PPACA at the cost of his own integrity.)

Lord Acton Revisited

To explain John Roberts’ actions, we must develop a model of political economy. That requires a short side trip into the realm of political philosophy.

Lord Acton’s famous maxim is: “Power corrupts; absolute power corrupts absolutely.” We are used to thinking of it in the context of a dictatorship or of an individual or institution temporarily or unjustly wielding power. But it is highly applicable within the context of today’s welfare-state democracies.

All of the Western industrialized nations have evolved into what F. A. Hayek called “absolute democracies.” They are democratic because popular vote determines the composition of representative governments. But they are absolute in scope and degree because the administrative agencies staffing those governments are answerable to no voter. And increasingly the executive, legislative and judicial branches of the governments wield powers that are virtually unlimited. In practical effect, voters vote on which party will wield nominal executive control over the agencies and dominate the legislature. Instead of a single dictator, voters elect a government body with revolving and rotating dictatorial powers.

As the power of government has grown, the power at stake in elections has grown commensurately. This explains the burgeoning amounts of money spent on elections. It also explains the growing rancor between opposing parties, since ordinary citizens perceive the loss of electoral dominance to be subjugation akin to living under a dictatorship. But instead of viewing this phenomenon from the perspective of John Q. Public, view it from within the brain of a policymaker or decisionmaker.

For example, suppose you are a completely fictional Chairman of a completely hypothetical Federal Reserve Board. We will call you “Bernanke.” During a long period of absurdly low interest rates, a huge speculative boom has produced unprecedented levels of real-estate investment by banks and near-banks. After stoutly insisting for years on the benign nature of this activity, you suddenly perceive the likelihood that this speculative boom will go bust and some indeterminate number of these financial institutions will become insolvent. What do you do? 

Actually, the question is really more “What do you say?” The actions of the Federal Reserve in regulating banks, including those threatened with or undergoing insolvency, are theoretically set down on paper, not conjured up extemporaneously by the Fed Chairman every time a crisis looms. These days, though, the duties of a Fed Chairman involve verbal reassurance and massage as much as policy implementation. Placing those duties in their proper light requires that our side trip be interrupted with a historical flashback.

Let us cast our minds back to 1929 and the onset of the Great Depression in the United States. At that time, virtually nobody foresaw the coming of the Depression – nobody in authority, that is. For many decades afterwards, the conventional narrative was that President Herbert Hoover adopted a laissez faire economic policy, stubbornly waiting for the economy to recover rather than quickly ramping up government spending in response to the collapse of the private sector. Hoover’s name became synonymous with government passivity in the face of adversity. Makeshift shanties and villages of the homeless and dispossessed became known as “Hoovervilles.”

It took many years to dispel this myth. The first truthteller was economist Murray Rothbard, whose 1962 book America’s Great Depression pointed out that Hoover had spent his entire term in a frenzy of activism. Far from remaining a pillar of fiscal rectitude, Hoover had presided over federal deficit spending so large that his successor, Democrat Franklin Delano Roosevelt, campaigned on a platform of balancing the federal-government budget. Hoover sternly warned corporate executives not to lower wages and adopted an official stance in favor of inflation.

Professional economists ignored Rothbard’s book in droves, as did reviewers throughout the mass media. Apparently the fact that Hoover’s policies failed to achieve their intended effects persuaded everybody that he couldn’t have actually followed the policies he did – since his actual policies were the very policies recommended by mainstream economists to counteract the effects of recession and Depression and were largely indistinguishable in kind, if not in degree, from those followed later by Roosevelt.

The anathematization of Herbert Hoover drove Hoover himself to distraction. The former President lived another thirty years, to age ninety, stoutly maintaining his innocence of the crime of insensitivity to the misery of the poor and unemployed. Prior to his presidency, Hoover had built a reputation as one of the great humanitarians of the 20th century by deploying his engineering and organizational skills in the cause of disaster relief across the globe. The trashing of his reputation as President is one of history’s towering ironies. As it happened, his economic policies were disastrous, but not because he didn’t care about the people. His failure was ignorance of economics – the same sin committed by his critics.

Worse than the effects of his policies, though, was the effect his demonization has had on subsequent policymakers. We do not remember the name of the captain of the Californian, the ship that lay anchored within sight of the Titanic but failed to answer distress calls and go to the rescue. But the name of Hoover is still synonymous with inaction and defeat. In politics, the unforgivable sin became not to act in the face of any crisis, regardless of the consequences.

Today, unlike in Hoover’s day, the Chairman of the Federal Reserve Board is the quarterback of economic policy. This is so despite the Fed’s ambiguous status as a quasi-government body, owned by its member banks with a leader appointed by the President. Returning to our hypothetical, we ponder the dilemma faced by the Chairman, “Bernanke.”

Bernanke only directly controls monetary policy and bank regulation. But he receives information about every aspect of the U.S. economy in order to formulate Fed policy. The Fed also issues forecasts and recommendations for fiscal and regulatory policies. Even though the Federal Reserve is nominally independent of politics and of the Treasury Department of the federal government, the Fed’s policies affect and are affected by government policies.

It might be tempting to assume that Fed Chairmen know what is going to happen in the economic future. But there is no reason to believe that is true. All we need do is examine their past statements to disabuse ourselves of that notion. Perhaps the popping of the speculative bubble that Bernanke now anticipates will produce an economic recession. Perhaps it will even topple the U.S. banking system like a row of dominoes and produce another Great Depression, à la 1929. But we cannot assume that either. The fact that we had one (1) Great Depression is no guarantee that we will have another one. After all, we have had 36 other recessions that did not turn into Great Depressions. There is nothing like a general consensus on what caused the Depression of 1929 and the 1930s. (The reader is invited to peruse the many volumes written by historians, economic and non-, on the subject.) About the only point of agreement among commentators is that a large number of things went wrong more or less simultaneously and all of them contributed in varying degrees to the magnitude of the Depression.

Of course, a good case might be made that it doesn’t matter whether a Fed Chairman can foresee a coming Great Depression or not. Until recently, one of the few things that united contemporary commentators was their conviction that another Great Depression was impossible. The safeguards put in place in response to the first one had foreclosed that possibility. First, “automatic stabilizers” would cause government spending to rise in response to any downturn in private-sector spending, thereby heading off any cumulative downward movement in investment and consumption in response to failures in the banking sector. Second, the Federal Reserve could and would act quickly in response to bank failures to prevent the resulting reverse-multiplier effect on the money supply, thereby heading off that threat at the pass. Third, bank regulations were modified and tightened to prevent failures from occurring or restrict them to isolated cases.
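For readers who want the mechanics behind that “reverse-multiplier effect,” the standard textbook relation (a simplification added here for illustration, not something spelled out in the original brief) is:

M = ((1 + c) / (r + c)) × B

where M is the money supply, B is the monetary base supplied by the Fed, c is the public’s currency-to-deposit ratio and r is the banks’ reserve-to-deposit ratio. Bank failures push both c and r upward, because depositors pull out currency and surviving banks hoard reserves; the multiplier therefore shrinks and M contracts even if B is unchanged. That is why the second safeguard calls for the Fed to expand the base quickly when banks fail.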

Yet despite everything written above, we can predict confidently that our fictional “Bernanke” would respond to a hypothetical crisis exactly as the real Ben Bernanke did respond to the crisis he faced and later described in the book he wrote about it. The actual and predicted responses are the same: Scare the daylights out of the public by predicting an imminent Depression of cataclysmic proportions and calling for massive government spending and regulation to counteract it. Of course, the real-life Bernanke claimed that he and Treasury Secretary Henry Paulson correctly foresaw the economic future and were heroically calling for preventive measures before it was too late. But the logic we have carefully developed suggests otherwise.

Nobody – not Federal Reserve Chairmen or Treasury Secretaries or California psychics – can foresee Great Depressions. Predicting a recession is only possible if the cyclical process underlying it is correctly understood, and there is no generally accepted theory of the business cycle. No, Bernanke and Paulson were not protecting America with their warning; they were protecting themselves. They didn’t know that a Great Depression was in the works – but they did know that they would be blamed for anything bad that did happen to the economy. Their only way of insuring against that outcome – of buying insurance against the loss of their jobs, their professional reputations and the possibility of historical “Hooverization” – was to scream for the biggest possible government action as soon as possible.

Ben Bernanke had been blasé about the effects of ultra-low interest rates; he had pooh-poohed the possibility that the housing boom was a bubble that would burst like a sonic boom with reverberations that would flatten the economy. Suddenly he was confronted with a possibility that threatened to make him look like a fool. Was he icy cool, detached, above all personal considerations? Thinking only about banking regulations, national-income multipliers and the money supply? Or was he thinking the same thought that would occur to any normal human being in his place: “Oh, my God, my name will go down in history as the Herbert Hoover of Fed chairmen”?

Since the reasoning he claims as his inspiration is so obviously bogus, it is logical to classify his motives as personal rather than professional. He was protecting himself, not saving the country. And that brings us to the case of Chief Justice John Roberts.

Chief Justice John Roberts: Selfless, Self-Interested or Self-Preservationist?

For centuries, economists have identified self-interest as the driving force behind human behavior. This has exasperated and even angered outside observers, who have mistaken self-interest for greed or money-obsession. It is neither. Rather, it merely recognizes that the structure of the human mind gives each of us a comparative advantage in the promotion of our own welfare above that of others. Because I know more about me than you do, I can make myself happier than you can; because you know more about you than I do, you can make yourself happier than I can. And by cooperating to share our knowledge with each other, we can make each other happier through trade than we could be if we acted in isolation – but that cooperation must preserve the principle of self-interest in order to operate efficiently.

Strangely, economists long assumed that the same people who function well under the guidance of self-interest throw that principle to the winds when they take up the mantle of government. Government officials and representatives, according to traditional economics textbooks, become selfless instead of self-interested when they take office. Selflessness demands that they put the public welfare ahead of any personal considerations. And just what is the “public welfare,” exactly? Textbooks avoided grappling with this murky question by hiding behind notions like a “social welfare function” or a “community indifference curve.” These are examples of what the late F. A. Hayek called “the pretense of knowledge.”

Beginning in the 1950s, the “public choice” school of economics and political science was founded by James Buchanan and Gordon Tullock. This school of thought treated people in government just like people outside of government. It assumed that politicians, government bureaucrats and agency employees were trying to maximize their utility and operating under the principle of self-interest. Because the incentives they faced were radically different than those faced by those in the private sector, outcomes within government differed radically from those outside of government – usually for the worse.

If we apply this reasoning to members of the Supreme Court, we are confronted by a special kind of self-interest exercised by people in a unique position of power and authority. Members of the Court have climbed their career ladder to the top; in law, there are no higher rungs. This has special economic significance.

When economists speak of “competition” among input-suppliers, we normally speak of people competing with others doing the same job for promotion, raises and advancement. None of these are possible in this context. What about more elevated kinds of recognition? Well, there is certainly scope for that, but only for the best of the best. On the current court, positive recognition goes to those who write notable opinions. Only Justice Scalia has the special talent necessary to stand out as a legal scholar for the ages. In this sense, Justice Scalia is “competing” with other judges in a self-interested way when he writes his decisions, but he is not competing with his fellow justices. He is competing with the great judges of history – John Marshall, Oliver Wendell Holmes, Louis Brandeis, and Learned Hand – against whom his work is measured. Otherwise, a judge can stand out from the herd by providing the deciding or “swing” vote in close decisions. In other words, he can become politically popular or unpopular with groups that agree or disagree with his vote. Usually, that results in transitory notoriety.

But in historic cases, there is the possibility that it might lead to “Hooverization.”

The bigger government gets, the more power it wields. More government power leads to more disagreement about its role, which leads to more demand for arbitration by the Supreme Court. This puts the Court in the position of deciding the legality of enactments that claim to do great things for people while putting their freedoms and livelihoods in jeopardy. Any judge who casts a deciding vote against such a measure will go down in history as “the man who shot down” the Great Bailout/the Great Health Care/the Great Stimulus/the Great Reproductive Choice, ad infinitum.

Almost all Supreme Court justices have little to gain but a lot to lose from opposing a measure that promotes government power. They have little to gain because they cannot advance further or make more money and they do not compete with J. Marshall, Holmes, Brandeis or Hand. They have a lot to lose because they fear being anathematized by history, snubbed by colleagues, picketed or assassinated in the present day, and seeing their children brutalized by classmates or the news media. True, they might get satisfaction from adhering to the Constitution and their personal conception of justice – if they are sheltered under the umbrella of another justice’s opinion or they can fly under the radar of media scrutiny in a relatively low-profile case.

Let us attach a name to the status occupied by most Supreme Court justices and to the spirit that animates them. It is neither self-interest nor selflessness in their purest forms; we shall call it self-preservation. They want to preserve the exalted status they enjoy and they are not willing to risk it; they are willing to obey the Constitution, observe the law and speak the truth but only if and when they can preserve their position by doing so. When they are threatened, their principles and convictions suddenly go out the window and they will say and do whatever it takes to preserve what they perceive as their “self.” That “self” is the collection of real income, perks, immunities and prestige that go with the status of Supreme Court Justice.

Supreme Court Justice John Roberts is an example of the model of self-preservation. In both of the ObamaCare decisions, his opinions for the majority completely abandoned his previous conservative positions. They plumbed new depths of logical absurdity – legal absurdity in the first decision and semantic absurdity in the second one. Yet one day after the release of King v. Burwell, Justice Roberts dissented in the Obergefell case by chiding the majority for “converting personal preferences into constitutional law” and disregarding the clear meaning of language in the laws being considered. In other words, he condemned precisely those sins he had himself committed the previous day in his majority opinion in King v. Burwell.

For decades, conservatives have watched in amazement, scratching their heads and wracking their brains as ostensibly conservative justices appointed by Republican presidents unexpectedly betrayed their principles when the chips were down, in high-profile cases. The economic model developed here lays out a systematic explanation for those previously inexplicable defections. David Souter, Anthony Kennedy, John Paul Stevens and Sandra Day O’Connor were the precursors to John Roberts. These were not random cases. They were the systematic workings of the self-preservationist principle in action.

DRI-191 for week of 3-15-15: More Ghastly than Beheadings! More Dangerous than Nuclear Proliferation! It’s…Cheap Foreign Steel!

An Access Advertising EconBrief:

More Ghastly than Beheadings! More Dangerous than Nuclear Proliferation! It’s…Cheap Foreign Steel!

The economic way to view news is as a product called information. Its value is enhanced by adding qualities that make it more desirable. One of these is danger. Humans react to threats and instinctively weigh the threat-potential of any problematic situation. That is why headlines of print newspapers, radio-news updates, TV evening-news broadcasts and Internet websites and blogs all focus disproportionately on dangers.

This obsession with danger does not jibe with the fact that human life expectancy has doubled over the last century and that violence has never been less threatening to mankind than today. Why do we suffer this cognitive dissonance? Our advanced state of knowledge allows us to identify and categorize threats that passed unrecognized for centuries. And today’s degraded journalistic product, more poorly written, edited and produced than formerly, plays on our neuroscientific weaknesses.

Economists are acutely sensitive to this phenomenon. Our profession made its bones by exposing the bogey of “the evil other” – foreign trade, foreign goods, foreign labor and foreign investment as ipso facto evil and threatening. Yet in spite of the best efforts of economists from Adam Smith to Milton Friedman, there is no more dependable pejorative than “foreign” in public discourse. (The word “racist” is a contender for the title, but overuse has triggered a backlash among the public.)

Thus, we shouldn’t be surprised by this headline in The Wall Street Journal: “Ire Rises at China Over Glut of Steel” (03/16/2015, By Biman Mukerji in Hong Kong, John W. Miller in Pittsburgh and Chuin-Wei Yap in Beijing). Surprised, no; outraged, yes.

The Big Scare 

The alleged facts of the article seem deceptively straightforward. “China produces as much steel as the rest of the world combined – more than four times as much as the peak U.S. production in the 1970s.” Well, inasmuch as (a) the purpose of all economic activity is to produce goods for consumption; and (b) steel is a key input in producing countless consumption goods and capital goods, ranging from vehicles to buildings to weapons to cutlery to parts, this would seem to be cause for celebration rather than condemnation. Unfortunately…

“China’s massive steel-making engine, determined to keep humming as growth cools at home, is flooding the world with exports, spurring steel producers around the globe to seek government protection from falling prices. From the European Union to Korea and India, China’s excess metal supply is upending trade patterns and heating up turf battles among local steelmakers. In the U.S., the world’s second-biggest steel consumer, a fresh wave of layoffs is fueling appeals for tariffs. U.S. steel producers such as U.S. Steel Corp. and Nucor Corp. are starting to seek political support for trade action.”

Hmmm. Since this article occupies the place of honor on the world’s foremost financial publication, we expect it to be authoritative. China has a “massive steel-making engine” – well, that stands to reason, since it’s turning out as much steel as everybody else put together. It is “determined to keep humming.” The article’s three (!) authors characterize the Chinese steelmaking establishment as a machine, which seems apropos. They then endow the metaphoric machine with the human quality of determination – bad writing comes naturally to poor journalists.

This determination is linked with “cooling” growth. Well, the only cooling growth that Journal readers can be expected to infer at this point is the slowing of the Chinese government’s official rate of annual GDP growth from 7.5% to 7%. Leaving aside the fact that the rest of the industrialized world is pining for growth of this magnitude, the authors are not only mixing their metaphors but mixing their markets as well. The only growth directly relevant to the points raised here – exports by the Chinese and imports by the rest of the world – is growth in the steel market specifically. The status of the Chinese steel market is hardly common knowledge to the general public. (Later, the authors eventually get around to the steel market itself.)

So the determined machine is reacting to cooling growth by “flooding the world with exports,” throwing said world into turmoil. The authors don’t treat this as any sort of anomaly, so we’re apparently expected to nod our heads grimly at this unfolding danger. But why? What is credible about this story? And what is dangerous about it?

Those of us who remember the 1980s recall that the monster threatening the world economy then was Japan, the unstoppable industrial machine that was “flooding the world” with imports. (Yes, that’s right – the same Japan whose economy has been lying comatose for twenty years.) The term of art was “export-led growth.” Now these authors are telling us that massive exports are a reaction to weakness rather than a symptom of growth.

“Unstoppable” Japan suddenly stopped in its tracks. No country has ever ascended an economic throne based on its ability to subsidize the consumption of other nations. Nor has the world ever died of economic indigestion caused by too many imports produced by one country. The story told at the beginning of this article lacks any vestige of economic sense or credibility. It is pure journalistic scare-mongering. Nowhere do the authors employ the basic tools of international economic analysis. Instead, they employ the basic tools of scarifying yellow journalism.

The Oxymoron of “Dumping” 

The authors have set up their readers with a menacing specter described in threatening language. A menace must have victims. So the authors identify the victims. Victims must be saved, so the authors bring the savior into their story. Naturally, the savior is government.

The victims are “steel producers around the globe.” They are victimized by “falling prices.” The authors are well aware that they have a credibility problem here, since their readers are bound to wonder why they should view falling steel prices as a threat to them. As consumers, they see falling prices as a good thing. As prices fall, their real incomes rise. Falling prices allow consumers to buy more goods and services with their money incomes. Businesses buy steel. Falling steel prices allow businesses to buy more steel. So why are falling steel prices a threat?

Well, it turns out that falling steel prices are a threat to “chief executives of leading American steel producers,” who will “testify later this month at a Congressional Steel Caucus hearing.” This is “the prelude to launching at least one anti-dumping complaint with the International Trade Commission.” And what is “dumping?” “‘Dumping,’ or selling abroad below the cost of production to gain market share, is illegal under World Trade Organization law and is punishable with tariffs.”

After this operatic buildup, it turns out that the foreign threat to America spearheaded by a gigantic, menacing foreign power is… low prices. Really low prices. Visualize buying steel at Costco or Wal-Mart.

Oh, no! Not that. Head for the bomb shelters! Break out the bug-out bags! Get ready to live off the grid!

The inherent implication of dumping is oxymoronic because the end-in-view behind all economic activity is consumption. A seller who sells for an abnormally low price is enhancing the buyer’s capability to consume, not damaging it. If anybody is “damaged” here, it is the seller, not the buyer. And that raises the question: why would a seller do something so foolish?

More often than not, proponents of the dumping thesis don’t take their case beyond the point of claiming damage to domestic import-competing firms. (The three Journal reporters make no attempt whatsoever to prove that the Chinese are selling below cost; they rely entirely on the allegation to pull their story’s freight.) Proponents rely on the economic ignorance of their audience. They paint an emotive picture of an economic world that functions like a giant Olympics. Each country is like a great big economic team, with its firms being the players. We are supposed to root for “our” firms, just as we root for our athletes in the Summer and Winter Olympics. After all, don’t those menacing firms threaten the jobs of “our” firms? Aren’t those jobs “ours?” Won’t that threaten “our” incomes, too?

This sports motif is way off base. U.S. producers and foreign producers have one thing in common – they both produce goods and services that we can consume, either now or in the future. And that gives them equal economic status as far as we are concerned. The ones “on our team” are the ones that produce the best products for our needs – period.

Wait a minute – what if the producers facing those low prices happen to be the ones employing us? Doesn’t that change the picture?

Yes, it does. In that case, we would be better off if our particular employer faced no foreign competition. But that doesn’t make a case for restricting or preventing foreign competition in general. Even people who lose their jobs owing to foreign competition faced by their employer may still gain more income from the lower prices brought by foreign competition in general than they lose by having to take another job at a lower income.

There’s another pertinent reason for not treating foreign firms as antagonistic to consumer interests. Foreign firms can, and do, locate in America and employ Americans to produce their products here. Years ago, Toyota was viewed as an interloper for daring to compete successfully with the “Big 3” U.S. automakers. Now the majority of Toyota automobiles sold in the U.S. are assembled on American soil in Toyota plants located here.

Predatory Pricing in International Markets

Dumping proponents have a last-ditch argument that they haul out when pressed with the behavioral contradictions stressed above. Sure, those foreign prices may be low now, import-competing producers warn darkly, but just wait until those devious foreigners succeed in driving all their competitors out of business. Then watch those prices zoom sky-high! The foreigners will have us in their monopoly clutches.

That loud groan you heard from the sidelines came from veteran economists, who would no sooner believe this than ask a zookeeper where to find the unicorns. The thesis summarized in the preceding paragraph is known as the “predatory pricing” hypothesis. The behavior was notoriously ascribed to John D. Rockefeller by the muckraking journalist Ida Tarbell. It was famously disproved by the research of economist John McGee. Ever since, economists have declined to take the concept seriously even in the limited market context of a single country.

But when propounded in the global context of international trade, the whole idea becomes truly laughable. Steel is a worldwide industry because its uses are so varied and numerous. A firm that employed this strategy would have to sacrifice trillions of dollars in order to reduce all its global rivals to insolvency. This would take years. These staggering losses would be incurred as current outflows. They would be weighed against putative gains that would begin sometime in the uncertain future – a fact that would make any lender blanch at the prospect of financing the venture.

As if the concept weren’t absurd enough, what makes it completely ridiculous is that even if the strategy succeeded, it would still fail. The assets of all those bankrupted firms wouldn’t vaporize; they could be bought up cheaply and held against the day when prices rose again. Firms like the American steel company Nucor have demonstrated the possibility of compact and efficient production, so competition would be sure to re-emerge whenever monopoly became a real prospect.

The likelihood of any commercial steel firm undertaking a global predatory-pricing scheme is nil. At this point, opponents of foreign trade are, in poker parlance, reduced to “a chip and a chair” in the debate. So they go all in on their last hand of cards.

How Do We Defend Against Government-Subsidized Foreign Trade?

Jiming Zou, analyst at Moody’s Investors Service, is the designated spokesman of last resort in the article. “Many Chinese steelmakers are government-owned or closely linked to local governments [and] major state-owned steelmakers continue to have their loans rolled over or refinanced.”

Ordinary commercial firms might balk at the prospect of predatory pricing, but a government can’t go broke. After all, it can always print money. Or, in the case of the Chinese government, it can always “manipulate the currency” – another charge leveled against the Chinese with tiresome frequency. “The weakening renminbi was also a factor in encouraging exports,” contributed another Chinese analyst quoted by the Journal.

One would think that a government with the awesome powers attributed to China’s wouldn’t have to retrench in all the ways mentioned in the article – reduce spending, lower interest rates, and cut subsidies to state-owned firms including steel producers. Zou is doubtless correct that “given their important role as employers and providers of tax revenue, the mills are unlikely to close or cut production even if running losses,” but that cuts both ways. How can mills “provide tax revenue” if they’re running huge losses indefinitely?

There is no actual evidence that the Chinese government is behaving in the manner alleged; the evidence is all the other way. Indeed, the only actual recipients of long-term government subsidies to firms operating internationally are creatures of government like Airbus and Boeing – firms that produce most or all of their output for purchase by government and are quasi-public in nature, anyway. But that doesn’t silence the protectionist chorus. Government-subsidized foreign competition is their hole card and they’re playing it for all it’s worth.

The ultimate answer to the question “how do we defend against government-subsidized foreign trade?” is: We don’t. There’s no need to. If a foreign government is dead set on subsidizing American consumption, the only thing to do is let them.

If the Chinese government is enabling below-cost production and sale by its firms, it must be doing it with money. There are only three ways it can get money: taxation, borrowing or money creation. Taxation bleeds Chinese consumers directly; money creation does it indirectly via inflation. Borrowing does it, too, when the bill comes due at repayment time. So foreign exports to America subsidized by the foreign government benefit American consumers at the expense of foreign consumers. No government in the world can subsidize the world’s largest consumer nation for long. But the only thing more foolish than doing it is wasting money trying to prevent it.

What Does “Trade Protection” Accomplish?

Textbooks in international economics spell out in meticulous detail – using either carefully drawn diagrams or differential and integral calculus – the adverse effects of tariffs and quotas on consumers. Generally speaking, tariffs have the same effects on consumers as taxes in general – they drive a wedge between the price paid by the consumer and the price received by the seller, provide revenue to the government and create a “deadweight loss” of value that accrues to nobody. Quotas are, if anything, even more deleterious. (The relative harm depends on circumstances too complex to enumerate.)
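
To make the wedge concrete, here is a minimal numerical sketch in Python. The linear demand and supply curves and the $10 tariff are purely hypothetical illustrations – they are drawn neither from a textbook nor from the Journal article – and serve only to show how the consumer’s loss splits into producer gain, government revenue and deadweight loss.

# Minimal sketch of a tariff's welfare effects in a small importing country.
# All numbers are hypothetical illustrations, not figures from the article.
world_price = 20.0          # price under free trade
tariff = 10.0               # specific tariff per unit
p0, p1 = world_price, world_price + tariff

def qd(p): return 100 - p   # linear demand: quantity demanded at price p
def qs(p): return p         # linear domestic supply: quantity supplied at price p

imports_after = qd(p1) - qs(p1)                  # imports once the tariff is in place
consumer_loss = (qd(p0) + qd(p1)) / 2 * tariff   # lost consumer surplus (trapezoid)
producer_gain = (qs(p0) + qs(p1)) / 2 * tariff   # gained producer surplus (trapezoid)
gov_revenue = tariff * imports_after             # tariff revenue
deadweight = consumer_loss - producer_gain - gov_revenue

print(f"consumer loss {consumer_loss:.0f}, producer gain {producer_gain:.0f}, "
      f"revenue {gov_revenue:.0f}, deadweight loss {deadweight:.0f}")
# consumer loss 750, producer gain 250, revenue 400, deadweight loss 100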

This leads to a painfully obvious question: If tariffs hurt consumers in the import-competing country, why in the world do we penalize alleged misbehavior by exporters by imposing tariffs? This is analogous to imposing a fine on a convicted burglar along with a permanent tax on the victimized homeowner.

Viewed in this light, trade protection seems downright crazy. And in purely economic terms, it is. But in terms of political economy, we have left a crucial factor out of our reckoning. What about the import-competing producers? In the Wall Street Journal article, these are the complainants at the bar of the International Trade Commission. They are also the people economists have been observing ever since the days of Adam Smith in the late 18th century, bellied up at the government-subsidy bar.

In Smith’s day, the economic philosophy of Mercantilism reigned supreme. Specie – that is, gold and silver – was considered the repository of real wealth. By sending more goods abroad via export than returned in the form of imports, a nation could produce a net inflow of specie payments – or so the conventional thinking ran. This philosophy made it natural to favor local producers and inconvenience foreigners.

Today, the raison d’être of the modern state is to take money from people in general and give it to particular blocs to create voting constituencies. This creates a ready-made case for trade protection. So what if it reduces the real wealth of the country – the goods and services available for consumption? It increases the electoral prospects of the politicians responsible and appears to increase the real wealth of the beneficiary blocs, which is sufficient for legislative purposes.

This is corruption, pure and simple. The authors of the Journal article present this corrupt process with a straight face because their aim is to portray cheap Chinese steel as a danger to the American people. Thus, their aims dovetail perfectly with the corrupt aims of government.

And this explains the front-page article in the 03/16/2015 Wall Street Journal. It reflects the news value of posing a danger where none exists – that is, the corruption of journalism – combined with the corruption of the political process.

The “Effective Rate of Protection”

No doubt the more temperate readers will object to the harshness of this language. Surely “corruption” is too harsh a word to apply to the actions of legislators. They have a great big government to run. They must try to be fair to everybody. If everybody is not happy with their efforts, that is only to be expected, isn’t it? That doesn’t mean that legislators aren’t trying to be fair, does it?

Consider the economic concept known as the effective rate of protection. It is unknown to the general public, but it appears in every textbook on international economics. It arises from the conjunction of two facts: first, that most goods and services are produced from raw materials and intermediate goods on their way to becoming final-stage (consumer) goods; and second, that governments have an irresistible impulse to levy taxes on goods that travel across international borders.

To keep things starkly simple and promote basic understanding, take the simplest kind of numerical example. Assume the existence of a fictional textile company. It takes a raw material, cotton, and spins, weaves and processes that cotton into a cloth that it sells commercially to its final consumers. This consumer cloth competes with the product of domestic producers as well as with cotton cloth produced by foreign textile producers. We assume that the prevailing world price of each unit of cloth is $1.00. We assume further that domestic producers obtain one textile unit’s worth of cotton for $.50 and add a further $.50 worth of value by spinning, weaving and processing it into the cloth.

We have a basic commodity being produced globally by multiple firms, indicating the presence of competitive conditions. But legislators, perhaps possessing some exalted concept of fairness denied to the rabble, decide to impose a tariff on the importation of cotton cloth. Not wishing to appear excessive or injudicious, the solons set this ad valorem tariff at 15%. Given the competitive nature of the industry, this will soon elevate the domestic price of textiles above the world price by the amount of the tariff; e.g., by $.15, to $1.15. Meanwhile, there is no tariff levied on cotton, the raw material. (Perhaps cotton is grown domestically and not imported into the country or, alternatively, perhaps cotton growers lack the political clout enjoyed by textile producers.)

The insight gained from the effective rate of protection begins with the realization that the net income of producers in general derives from the value they add to any raw materials and/or intermediate products they utilize in the production process. Initially, textile producers added $.50 worth of value for every unit of cotton cloth they produced. Imposition of the tariff allows the domestic textile price to rise from $1.00 to $1.15, which causes textile producers’ value added to rise from $.50 to $.65.

Legislators judiciously and benevolently decided that the proper amount of “protection” to give domestic textile producers from foreign competition was 15%. They announced this finding amid fanfare and solemnity. But it is wrong. The tariff has the explicit purpose of “protecting” the domestic industry, of giving it leeway it would not otherwise get under the supposedly harsh and unrelenting regime of global competition. But this tariff does not give domestic producers 15% worth of protection. $.15 divided by $.50 – that is, the increase in value added divided by the original value added – is .30, or 30%. The effective rate of protection is double the size of the “nominal” (statutory) level of protection. In general, think of the statutory tariff rate as the surface appearance and the effective rate as the underlying truth.
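
For readers who prefer to see the arithmetic laid out, here is a short Python sketch using the numbers already given in the example – a $1.00 world price of cloth, $.50 of cotton per unit of cloth and a 15% tariff on cloth with none on cotton.

# Effective rate of protection for the textile example in the text.
world_price = 1.00     # world price of one unit of cloth
input_cost = 0.50      # cotton required per unit of cloth (no tariff on cotton)
cloth_tariff = 0.15    # 15% ad valorem tariff on imported cloth

value_added_free = world_price - input_cost                             # $0.50 under free trade
value_added_protected = world_price * (1 + cloth_tariff) - input_cost   # $0.65 behind the tariff

effective_rate = (value_added_protected - value_added_free) / value_added_free
print(f"nominal tariff {cloth_tariff:.0%}, effective protection {effective_rate:.0%}")
# nominal tariff 15%, effective protection 30%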

Like oh-so-many economic principles, the effective rate of protection is a relatively simple concept that can be illustrated with simple examples, but that rapidly becomes complex in reality. Two complications need mention. When tariffs are also levied on raw materials and/or intermediate products, this affects the relationship between the effective and nominal rate of protection. The rule of thumb is that higher tariff rates on raw materials and intermediate goods relative to tariffs on final goods tend to lower effective rates of protection on the final goods – and vice-versa.

The other complication is the share of the final product’s value accounted for by raw materials and intermediate goods, both before and after imposition of the tariff. This is a particularly knotty problem because tariffs affect the prices faced by buyers, which in turn affect purchases, which in turn can change that share. When tariffs on final products exceed those on raw materials and intermediate goods – and this has usually been the case in American history – an increase in that share will increase the effective rate.
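
A small extension of the previous sketch illustrates the first complication. The 15% cotton tariff added below is assumed purely for illustration; it shows the rule of thumb at work – taxing the input shrinks the protected value added and pulls the effective rate back down toward the nominal rate.

# Same textile example, now with a hypothetical 15% tariff on cotton as well.
world_price = 1.00
input_cost = 0.50
cloth_tariff = 0.15
cotton_tariff = 0.15   # assumed for illustration only

value_added_free = world_price - input_cost
value_added_protected = (world_price * (1 + cloth_tariff)
                         - input_cost * (1 + cotton_tariff))

effective_rate = (value_added_protected - value_added_free) / value_added_free
print(f"effective protection with an equal tariff on the input: {effective_rate:.0%}")
# 15% -- when input and output face the same rate, the effective rate equals the nominal rate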

But for our immediate purposes, it is sufficient to realize that appearance does not equal reality where tariff rates are concerned. And this is the smoking gun in our indictment of the motives of legislators who promote tariffs and restrictive foreign-trade legislation.

 

Corrupt Legislators and Self-Interested Reporting are the Real Danger to America

In the U.S., the tariff schedule includes thousands of tariffs of widely varying sizes. These not only allow legislators to pose as saviors of numerous business constituencies. They also allow them to lie about the degree of protection being provided, the real locus of the benefits and the reasons behind them.

Legislators claim that the size of tariff protection being provided is modest, both in absolute and relative terms. This is a lie. Effective rates of protection are higher than they appear for the reasons explained above. They unceasingly claim that foreign competitors behave “unfairly.” This is also a lie, because there is no objective standard by which to judge fairness in this context – there is only the economic standard of efficiency. Legislators deliberately create bogus standards of fairness to give themselves the excuse to provide benefits to constituent blocs – benefits that take money from the rest of us. International trade bodies are created to further the ends of domestic governments in this ongoing deception.

Readers should ask themselves how many times they have read the term “effective rate of protection” in The Wall Street Journal, The Financial Times of London, Barron’s, Forbes or any of the major financial publications. That is an index of the honesty and reputability of financial journalism today. The term was nowhere to be found in the Journal piece of 03/16/2015.

Instead, the three Journal authors busied themselves flacking for a few American steel companies. They showed bar graphs of increasing Chinese steel production and steel exports. They criticized the Chinese because the country’s steel production has “yet to slow in lockstep” with growth in demand for steel. They quoted self-styled experts on China’s supposed “problem [with] hold[ing] down exports” – without ever explaining what rule, standard or principle of economic logic would require a nation to withhold exports from willing buyers. They cited year-over-year increases in exports for January 2013, 2014 and 2015 as evidence of China’s guilt, along with the fact that the Chinese were on pace to export more steel than any other country “in this century.”

The reporters quoted the whining of a U.S. Steel vice-president that demonstrating damage from Chinese exports is just “too difficult” to satisfy trade commissioners. Not content with this, they threw in complaints by an Indian steel executive and by South Koreans as well. They neglected to tell their readers that Chinese, Indian and South Korean steels tend to be of lower grades – a datum that helps to explain their lower prices. U.S. and Japanese steels tend to be of higher grade, which helps to explain why companies like Nucor have been able to keep prices and profit margins high for years. The authors cited one layoff at U.S. Steel but forgot to cite the recent article in their own Wall Street Journal lauding the history of Nucor, which has never laid off an employee despite the pressure of Chinese competition.

That same article quoted complaints by steel buyers in this country about the “competitive disadvantage” imposed by the higher-priced U.S. steel. Why are the complaints about cheap Chinese exports front-page news while the complaints about high-priced American steel are buried in the back pages – and not even mentioned by a subsequent banner article boasting input by no fewer than three Journal reporters? Why did the reporters forget to cite the benefits accruing to American steel users from low prices for steel imports? Don’t these reporters read their own newspaper? Or do they report only what comports with their own agenda?

DRI-172 for week of 1-18-15: Consumer Behavior, Risk and Government Regulation

An Access Advertising EconBrief: 

Consumer Behavior, Risk and Government Regulation

The Obama administration has drenched the U.S. economy in a torrent of regulation. It is a mixture of new rules formulated by new regulatory bodies (such as the Consumer Financial Protection Bureau), new rules levied by old, preexisting federal agencies (such as those slapped on bank lending by the Federal Reserve) and old rules newly imposed or enforced with new stringency (such as those emanating from the Department of Transportation and bedeviling the trucking industry).

Some people within the business community are pleased by these regulations, but it is fair to say that most are not. The President and his subordinates, however, have been unyielding in their insistence that the regulations are not merely desirable but necessary to the health, well-being, vitality and economic growth of America.

Are the people affected by the regulations bad? Do the regulations make them good, or merely constrain their bad behavior? What entitles the particular people designing and implementing the regulations to perform in this capacity – is it their superior motivations or their superior knowledge? That is, are they better people or merely smarter people than those they regulate? The answer can’t be democratic election, since regulators are not elected directly. We are certainly entitled to ask why a President could possibly suppose that some people can effectively regulate an economy of over 300 million people. If they are merely better people, how do we know that their regulatory machinations will succeed, however well-intentioned they are? If they are merely smarter people, how do we know their actions will be directed toward the common good (whatever in the world that might be) and not toward their own betterment, to the exclusion of all else? Apparently, the President must select regulators who are both better people and smarter people than their constituents. Yet government regulators are typically plucked from comparative anonymity rather than from the firmament of public visibility.

Of all American research organizations, the Cato Institute has the longest history of examining government regulation. Recent Cato publications help rebut the longstanding presumptions in favor of regulation.

The FDA Graciously Unchains the American Consumer

In “The Rise of the Empowered Consumer” (Regulation, Winter 2014-2015, pp.34-41, Cato Institute), author Lewis A. Grossman recounts the Food and Drug Administration’s (FDA) policy evolution beginning in the mid-1960s. He notes that “Jane, a [hypothetical] typical consumer in 1966… had relatively few choices” across a wide range of food-products like “milk, cheese, bread and jam” because FDA’s “identity standards allowed little variation.” In other words, the government determined what kinds of products producers were allowed to legally produce and sell to consumers. “Food labels contained barely any useful information. There were no “Nutrition Facts” panels. The labeling of many foods did not even include a statement of ingredients. Nutrient content descriptors were rare; indeed, the FDA prohibited any reference whatever to cholesterol. Claims regarding foods’ usefulness in preventing disease were also virtually absent from labels; the FDA considered any such statement to render the product an unapproved – and thus illegal – drug.”

Younger readers will find the quoted passage startling; they have probably assumed that ingredient and nutrient-content labels were forced on sellers over their strenuous objections by noble and altruistic government regulators.

Similar constraints bound Jane should she have felt curiosity about vitamins, minerals or health supplements. The types and composition of such products were severely limited and their claims and advertising were even more severely limited by the FDA. Over-the-counter medications were equally limited – few in number and puny in their effectiveness against such infirmities as “seasonal allergies… acid indigestion…yeast infection[s] or severe diarrhea.” Her primary alternative for treatment was a doctor’s visit to obtain a prescription, which included directions for use but no further enlightening information about the therapeutic agent. Not only was there no Internet; copies of the Physicians’ Desk Reference were also unavailable in bookstores. Advertising of prescription medicines was strictly forbidden by the FDA outside of professional publications like the Journal of the American Medical Association.

Food substances and drugs required FDA approval. The approval process might as well have been conducted in Los Alamos under FBI guard as far as Jane was concerned. Even terminally ill patients were hardly ever allowed access to experimental drugs and treatments.

From today’s perspective, it appears that the position of consumers vis-à-vis the federal government in these markets was that of a citizen in a totalitarian state. The government controlled production and sale; it controlled the flow of information; it even controlled the life-and-death choices of the citizenry, albeit with benevolent intent. (But what dictatorship – even the most savage in history – has failed to reaffirm the benevolence of its intentions?) What led to this situation in a country often advertised as the freest on earth?

In the late 19th and early 20th centuries, various incidents of alleged consumer fraud and the publicity given them by various muckraking authors prompted Progressive administrations led by Theodore Roosevelt, William Howard Taft and Woodrow Wilson to launch federal-government consumer regulation. The FDA was the flagship creation of this movement, the outcome of what Grossman called a “war against quackery.”

Students of regulation observe this common denominator. Behind every regulatory agency there is a regulatory movement; behind every movement there is an “origin story;” behind every story there are incidents of abuse. And upon investigation, these abuses invariably prove either false or wildly exaggerated. But even had they been meticulously documented, they would still not substantiate the claims made for them and not justify the regulatory actions taken in response.

Fraud was already illegal in the 19th and 20th centuries, as it had been earlier. Competitive markets punish producers who fail to satisfy consumers by putting those producers out of business. Limiting the choices of producers and consumers harms consumers without providing compensating benefits. The only justification for FDA regulation of the type practiced during the first half of the 20th century was that government regulators were omniscient, noble and efficient while consumers were dumbbells. That is putting it baldly, but it is hardly an overstatement. After all, consider the situation that exists today.

Plentiful varieties of products exist for consumers to pick from. They exist because consumers want them to exist, not because the FDA decreed their existence. Over-the-counter medications are plentiful and effective. The FDA tries to regulate their uses, as it does for prescription medications, but thankfully doctors can choose from a plethora of “off-label” uses. Nutrient and ingredient labels inform the consumer’s quest to self-medicate such widespread ailments as Type II diabetes – a disease that spread to near-epidemic status but is now being controlled, thanks to rejection of the diet that the government promoted for decades and embrace of a diet that the government condemned as unsafe. Doctors and pharmacists discuss medications and supplements with patients and provide information about ingredients, side effects and drug interactions. And patients are finally rising in rebellion against the tyranny of FDA drug approval and the pretense of compassion exhibited by the agency’s “compassionate use” drug-approval policy for patients facing life-threatening diseases.

Grossman contrasts the totalitarian policies of yesteryear with the comparative freedom of today in polite academic language. “The FDA treated Jane’s… cohort…as passive, trusting and ignorant consumers. By comparison, [today’s consumer] has unmediated [Grossman means free] access to many more products and to much more information about those products. Moreover, modern consumers have acquired significant influence over the regulation of food and drugs and have generally exercised that influence in ways calculated to maximize their choice.”

Similarly, he explains the transition away from totalitarianism to today’s freedom in hedged terms. To be sure, the FDA gave up much of its power over producers and consumers kicking and screaming; consumers had to take all the things listed above rather than receive them as the gifts of a generous FDA. Nevertheless, Grossman insists that consumers’ distrust of the word “corporation” is so profound that they believe that the FDA exerts some sort of countervailing authority to ensure “the basic safety of products and the accuracy and completeness of labeling and advertising.” This concerning an agency that fought labeling and advertising tooth and claw! As to safety, Grossman makes the further caveat that consumers “prefer that government allow consumers to make their own decisions regarding what to put in their bodies…except in cases in which risk very clearly outweighs benefit” [emphasis added]. That implies that consumers believe that the FDA has some special competence to assess risks and benefits to individuals, which completely contradicts the principle that individuals should be free to make their own choices.

Since Grossman clearly treats consumer safety and risk as a special case of some sort, it is worth investigating this issue at special length. We do so below.

Government Regulation of Cigarette Smoking

For many years, individual cigarette smokers sued cigarette companies under the product-liability laws. They claimed that cigarettes “gave them cancer,” that the cigarette companies knew it while consumers did not, and that the companies were therefore liable for selling dangerous products to the public.

The consumers got nowhere.

To this day, an urban legend persists that the tobacco companies’ run of legal success was owed to deep financial pockets and fancy legal footwork. That is nonsense. As the leading economic expert on risk (and the longtime cigarette controversy), W. Kip Viscusi, concluded in Smoke-Filled Rooms: A Postmortem on the Tobacco Deal, “the basic fact is that when cases reached the jury, the jurors consistently concluded that the risks of cigarettes were well-known and voluntarily incurred.”

In the early 1990s, all this changed. States sued the tobacco companies for medical costs incurred by government due to cigarette smoking. The suits never reached trial. The tobacco companies settled with four states; a Master Settlement Agreement applied to remaining states. The aggregate settlement amount was $243 billion, which in the days before the Great Recession, the Obama administration and the Bernanke Federal Reserve was a lot of money. (To be sure, a chunk of this money was gobbled up by legal fees; the usual product-liability portion is one-third of the settlement, but gag orders have hampered complete release of information on lawyers’ fees in these cases.)

However, the states were not satisfied with this product-liability bonanza. They increased existing excise taxes on cigarettes. In “Cigarette Taxes and Smoking,” Regulation (Winter 2014-2015, pp. 42-46, Cato Institute), authors Kevin Callison and Robert Kaestner ascribe these tax increases to “the hypothesis… that higher cigarette taxes save a substantial number of lives and reduce health-care costs by reducing smoking, [which] is central to the argument in support of regulatory control of cigarettes through higher cigarette taxes.”

Callison and Kaestner cite research from anti-smoking organizations and comments to the FDA that purport to find price elasticities of demand for cigarettes of between -0.3 and -0.7, with the lower figure applying to adults and the higher to adolescents. (The words “lower” and “higher” refer to the absolute, not algebraic, value of the elasticities.) Price elasticity of demand is defined as the percentage change in quantity demanded associated with a 1 percent change in price. Thus, a 1% increase in price would cause quantity demanded to fall by between 0.3% and 0.7% according to these estimates.
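
As a rough illustration of what such an estimate means in practice, the short Python sketch below translates an elasticity into a predicted drop in consumption. The -0.4 elasticity and the 10% price increase are assumed numbers chosen for illustration, not figures taken from the studies the authors cite.

# How a price elasticity translates a price increase into lower predicted consumption.
# The elasticity and the price change are illustrative assumptions.
elasticity = -0.4      # % change in quantity demanded per 1% change in price
price_change = 0.10    # a 10% increase in the retail price of cigarettes

quantity_change = elasticity * price_change
print(f"predicted change in cigarettes demanded: {quantity_change:.1%}")
# predicted change in cigarettes demanded: -4.0%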

The problem with these estimates is that they were based on research done decades ago, when smoking rates were much higher. The authors estimate that today’s smokers are mostly the young and the poorly educated. Their price elasticities are very, very low. Higher cigarette taxes have only a minuscule effect on consumption of cigarettes. They do not reduce smoking to any significant extent. Thus, they do not save on health-care costs.

They serve only to fatten the coffers of state governments. Cigarette taxes today play the role played by the infamous tax on salt levied by French kings before the French Revolution. When the tax goes up, the effective price paid by the consumer goes up. When consumption falls by a much smaller percentage than the price increase, tax revenues rise. Both the cigarette-tax increases of today and the salt-tax increases of the 17th and 18th centuries were big revenue-raisers.
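
The revenue arithmetic behind that claim can be sketched in a few lines. Every figure below – the number of packs, the price, the tax and the -0.2 elasticity – is a hypothetical round number chosen only to show why receipts rise when demand is price-inelastic.

# Why an excise-tax increase raises revenue when demand is price-inelastic.
# All figures are hypothetical round numbers for illustration.
packs_sold = 1_000_000     # annual packs sold in some state
price = 6.00               # retail price per pack, tax included
tax = 1.00                 # current excise tax per pack
tax_increase = 0.50        # additional tax per pack
elasticity = -0.2          # assumed price elasticity of today's smokers

pct_price_change = tax_increase / price
new_packs = packs_sold * (1 + elasticity * pct_price_change)

old_revenue = tax * packs_sold
new_revenue = (tax + tax_increase) * new_packs
print(f"revenue before: ${old_revenue:,.0f}, revenue after: ${new_revenue:,.0f}")
# revenue before: $1,000,000, revenue after: about $1,475,000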

In the 1990s, tobacco companies were excoriated as devils. Today, though, several of the lawyers who sued the tobacco companies are either in jail for fraud, under criminal accusation or dead under questionable circumstances. And the state governments who “regulate” the tobacco companies by taxing them are now revealed as merely in it for the money. They have no interest in discouraging smoking, since it would cut into their revenues if smoking were to fall too much. State governments want smoking to remain price-inelastic so that they can continue to raise more revenue by raising taxes on cigarettes.

 

Can Good Intentions Really Be All That Bad? The Cost of Federal-Government Regulation

The old saying “You can’t blame me for trying” suggests that there is no harm in trying to make things better. The economic principle of opportunity cost reminds us that the use of resources for one purpose – in this case, the various ostensibly benevolent and beneficent purposes of regulation – denies the benefits of using them for something else. So how costly is that?

In “A Slow-Motion Collapse” (Regulation, Winter 2014-2015, pp. 12-15, Cato Institute), author Pierre Lemieux cites several studies that attempted to quantify the costs of government regulation. The most comprehensive of these was by academic economists John Dawson and John Seater, who used variations in the annual Code of Federal Regulations as their index for regulatory change. In 1949, the CFR had 19,335 pages; by 2005, that total had risen to 134,261 pages, a roughly seven-fold increase in less than six decades. (Remember, this includes federal regulation only, excluding state and local government regulation, which might triple that total.)

Naturally, proponents of regulation blandly assert that the growth of real income (also roughly seven-fold over the same period) requires larger government, hence more regulation, to keep pace. This nebulous generalization collapses upon close scrutiny. Freedom and free markets naturally result in more complex forms of goods, services and social interactions, but if regulatory constraints “keep pace” they will restrain the very benefits that freedom creates. The very purpose of freedom itself will be vitiated. We are back at square one, asking the question: What gives regulators the right and the competence to make that sort of decision?

Dawson and Seater developed an econometric model to estimate the size of the bite taken by regulation from economic growth. Their estimate was that it has reduced economic growth on average by about 2 percentage points per year. This is a huge reduction. If we were to apply it to the 2011 GDP, it would work as follows: Starting in 1949, had all subsequent regulation not happened, 2011 GDP would have been 39 trillion dollars higher, or about 54 trillion. As Lemieux put it: “The average American (man, woman and child) would now have about $125,000 more per year to spend, which amounts to more than three times [current] GDP per capita. If this is not an economic collapse, what is?”
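
The jump from “about 2 percentage points per year” to “about $39 trillion” is simply compound growth over the 62 years from 1949 to 2011. The back-of-the-envelope Python sketch below, which approximates actual 2011 GDP at $15.5 trillion, lands in the same ballpark as Lemieux’s figures.

# Back-of-the-envelope check of the Dawson-Seater/Lemieux calculation.
# Actual 2011 GDP is approximated here as $15.5 trillion.
actual_gdp_2011 = 15.5        # trillions of dollars (approximate)
lost_growth = 0.02            # 2 percentage points of growth lost per year
years = 2011 - 1949           # 62 years

counterfactual = actual_gdp_2011 * (1 + lost_growth) ** years
gap = counterfactual - actual_gdp_2011
print(f"counterfactual 2011 GDP: about ${counterfactual:.0f} trillion, "
      f"gap: about ${gap:.0f} trillion")
# roughly $53 trillion and $37 trillion -- the same order of magnitude as the article's $54T and $39T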

Lemieux points out that, while this estimate may strain the credulity of some, it also may actually incorporate the effects of state and local regulation, even though the model itself did not include them in its index. That is because it is reasonable to expect a statistical correlation between the three forms of regulation. When federal regulation rises, it often does so in ways that require corresponding matching or complementary state and local actions. Thus, those forms of regulation are hidden in the model to some considerable degree.

Lemieux also points to Europe, where regulation is even more onerous than in the U.S. – and growth has been even more constipated. We can take this reasoning even further by bringing in the recent example of less-developed countries. The Asian Tigers experienced rapid growth when they espoused market-oriented economics; could their relative lack of regulation help explain this economic-development success story? India and mainland China turned their economies around when they turned away from socialism and Communism, respectively; regulation still hamstrings India, while China is dichotomized into a relatively autonomous small-scale competitive sector and a heavily regulated, planned and government-controlled big-business economy. Signs point to a recent Chinese growth dip tied to the bursting of a bubble created by easy money and credit granted to the regulated sector.

The price tag for regulation is eye-popping. It is long past time to ask ourselves why we are stuck with this lemon.

Government Regulation as Wish-Fulfillment

For millennia, children have cultivated the dream fantasies of magical figures that make their wishes come true. These apparently satisfy a deep-seated longing for security and fulfillment. Freud referred to this need as “wish fulfillment.” Although Freudian psychology was discredited long ago, the term retains its usefulness.

When we grow into adulthood, we do not shed our childish longings; they merely change form. In the 20th century, motion pictures became the dominant art form in the Western world because they served as fairy tales for adults by providing alternative versions of reality that were preferable to daily life.

When asked by pollsters to list or confirm the functions regulation should perform, citizens repeatedly compose “wish lists” that are either platitudes or, alternatively, duplicate the functions actually approximated by competitive markets. It seems even more significant that researchers and policymakers do exactly the same thing. Returning to Lewis Grossman’s evaluation of the public’s view of FDA: “Americans’ distrust of major institutions has led them to the following position: On the one hand, they believe the FDA has an important role to play in ensuring the basic safety of products and the accuracy and completeness of labeling and advertising. On the other hand, they generally do not want the FDA to inhibit the transmission of truthful information from manufacturers to consumers, and – except in cases in which risk very clearly outweighs benefit – they prefer that the government allow consumers to make their own decisions regarding what to put in their own bodies.”

This is a masterpiece of self-contradiction. Just exactly what is an “important role to play,” anyway? Allowing an agency that previously denied the right to label and advertise to play any role is playing with fire; it means that genuine consumer advocates have to fight a constant battle with the government to hold onto the territory they have won. If consumers really don’t want the FDA to “inhibit the transmission of truthful information from manufacturers to consumers,” they should abolish the FDA, because free markets do the job consumers want done by definition and the laws already prohibit fraud and deception.

The real whopper in Grossman’s summary is the caveat about risk and benefit. Government agencies in general and the FDA in particular have traditionally shunned cost/benefit and risk/benefit analysis like the plague; when they have attempted it they have done it badly. Just exactly who is going to decide when risk “very clearly” outweighs benefit in a regulatory context, then? Grossman, a professional policy analyst who should know better, is treating the FDA exactly as the general public does. He is assuming that a government agency is a wish-fulfillment entity that will do exactly what he wants done – or, in this case, what he claims the public wants done – rather than what it actually does.

Every member of the general public would scornfully deny that he or she believes in a man called Santa Claus who lives at the North Pole and flies around the world on Christmas Eve distributing presents to children. But for an apparent majority of the public, government in general and regulation in particular plays a similar role because people ascribe quasi-magical powers to them to fulfill psychological needs. For these people, it might be more apropos to view government as “Mommy” or “Daddy” because of the strength and dependent nature of the relationship.

Can Government Control Consumer Risk? The Emerging Scientific Answer: No 

The comments of Grossman, assorted researchers and countless other commentators and onlookers over the years imply that government regulation is supposed to act as a sort of stern but benevolent parent, protecting us from our worst impulses by regulating the risks we take. This is reflected not only in cigarette taxes but also in the draconian warnings on cigarette packages and in numerous other measures taken by regulators. Mandatory seat belt laws, adopted by state legislatures in 49 states since the mid-1980s at the urging of the federal government, promised the near-elimination of automobile fatalities. Government bureaucracies like the Occupational Safety and Health Administration have covered the workplace with a raft of safety regulations. The Consumer Product Safety Commission presides with an eagle eye over the safety of the products that fill our market baskets.

In 1975, University of Chicago economist Sam Peltzman published a landmark study in the Journal of Political Economy. In it, Peltzman revealed that the various devices and measures mandated by government and introduced by the big auto companies in the 1960s had not actually produced statistically significant improvements in safety, as measured by auto fatalities and injuries. In particular, use of the new three-point seat belts seemed to show a slight improvement in driver fatalities that was more than offset by a rise in fatalities to others – pedestrians, cyclists and possibly occupants of victim vehicles. Over the years, subsequent research confirmed Peltzman’s results so repeatedly that former Chairman of the Council of Economic Advisers N. Gregory Mankiw dubbed this the “Peltzman Effect.”

A similar kind of result emerged throughout the social sciences. Innovations in safety continually failed to produce the kind of safety results that experts anticipated and predicted, often failing to provide any improved safety performance at all. It seems that people respond to improved safety by taking more risk, thwarting the expectations of the experts. Needless to say, this same logic applies also to rules passed by government to force people to behave more safely. People simply thwart the rules by finding ways to take risk outside the rules. When forced to wear seat belts, for example, they drive less carefully. Instead of endangering only themselves by going beltless, now they endanger others, too.

Today, this principle is well-established in scientific circles. It is called risk compensation. The idea that people strive to maintain, or “purchase,” a particular level of risk and hold it constant in the face of outside efforts to change it is called risk homeostasis.

These concepts make the entire project of government regulation of consumer risk absurd and counterproductive. Previously it was merely wrong in principle, an abuse of human freedom. Now it is also wrong in practice because it cannot possibly work.

Dropping the Façade: the Reality of Government Regulation

If the results of government regulation do not comport with its stated purposes, what are its actual purposes? Are the politicians, bureaucrats and employees who comprise the legislative and executive branches and the regulatory establishment really unconscious of the effects of regulation? No, for the most part the beneficiaries of regulation are all too cynically aware of the façade that covers it.

Politicians support regulation to court votes from the government-dependent segment of the voting public and to avoid being pilloried as killers and haters or – worst of all – a “tool of the big corporations.” Bureaucrats tacitly do the bidding of politicians in their role as administrators. In return, politicians do the bidding of bureaucrats by increasing their budgets and staffs. Employees vote for politicians who support regulation; in return, politicians vote to increase budgets. Employees follow the orders of bureaucrats; in return, bureaucrats hire bigger staffs that earn them bigger salaries.

This self-reinforcing and self-supporting network constitutes the metastatic cancer of big government. The purpose of regulation is not to benefit the public. It is to milk the public for the benefit of politicians, bureaucrats and government employees. Regulation drains resources away from and hamstrings the productive private economy.

Even now, as we speak, this process – aided, abetted and drastically accelerated by rapid money creation – is bringing down the economies of the Western world around our ears by simultaneously wreaking havoc on the monetary order with easy money, burdening the financial sector with debt and eviscerating the real economy with regulations that steadily erode its productive potential.

DRI-275 for week of 9-28-14: Touchdown-Celebration Prayer: Time for Separation of Church and Red Zone?

An Access Advertising EconBrief:

Touchdown-Celebration Prayer: Time for Separation of Church and Red Zone?

Fans of the National Football League (NFL) have become inured to the spectacle of celebrations conducted by players who score a touchdown. These actions have assumed a variety of forms, ranging from ordinary excesses of joy and enthusiasm like jumping up and down to esoteric rituals like spiking or dunking the football over the goalpost. Perhaps the most common form is some sort of gyration or celebratory dance. The practice originated among certain players whose fame depended at least as much on their self-promotional zeal as upon their athletic prowess – Deion Sanders, formerly of the Dallas Cowboys, comes particularly to mind.

Older readers will appreciate the striking contrast between this modern attitude and that exhibited by legendary stars of yesteryear like Jim Brown of the Cleveland Browns and Johnny Unitas of the Baltimore Colts. Brown, who may have been the greatest running back of all time, was slow to assume his stance prior to the center snap of the football and even slower to rise after being tackled when running the ball. His demeanor was impassive. He conserved his energy and saved his exertions for the time between the snap and the referee’s whistle signaling the end of a play. Did this account for the fact that his average-yards-gained per carry was the highest of any Hall of Fame runner?

Unitas was similarly deadpan on the field. As quarterback for the Colts, he terrified opponents and awed teammates with his knack for leading his team from behind in the closing seconds of a game. But fans could never have guessed by looking at him whether he had just been sacked for a loss or thrown the winning touchdown pass as time expired. If any of his teammates had ever done anything as gauche as celebrating a long run or spectacular catch, they would have been frozen solid by the icy stare known throughout the NFL as the “Unitas look.”

In the so-called “greatest football game ever played” – the 1958 NFL championship game between the Baltimore Colts and the New York Giants – Unitas provided the prelude to victory by completing a daring sideline pass to tight end Jim Mutscheller at the Giants’ one-yard line in sudden-death overtime. At the post-game press conference, a reporter ventured to question Unitas’s play-calling decision: “That was a pretty dangerous pass, wasn’t it? What if it had been intercepted?” The reporter was the first televised victim of “the look.” “When you know what you’re doing,” Unitas replied without needing to raise his voice, “they’re not intercepted.”

Nowadays many players feel obligated to supplement the audio and visual record of play supplied by television by advertising what has just happened. The newest wrinkle on this style of irrepressible self-expression is praying in the end zone after scoring a touchdown.

The Abdullah Case and Ensuing Fallout

In the fourth quarter of a game between the Kansas City Chiefs and the New England Patriots at Arrowhead Stadium on September 29, 2014, Kansas City safety Husein Abdullah intercepted a pass thrown by New England quarterback Tom Brady. Abdullah returned the ball 39 yards to the New England end zone, where he dropped to his knees in prayer.

End-zone touchdown celebrations are now so commonplace that rules have been drafted to cover them. One of those rules forbids celebrating while “on the ground.” The referees invoked this rule, penalizing the Chiefs 15 yards on the ensuing kickoff for “unsportsmanlike conduct.”

That did not end the matter, though. Two days later, the NFL’s league office announced that the official decision had been in error. Why? It seems that “there are exceptions made for religious expressions,” according to NFL vice-president for football communications Michael Signora. But the referees may have been confused by Abdullah’s body language; he slid on his knees rather than simply kneeling down. Probably sensing an opportune moment, the well-known organization CAIR (Council on American-Islamic Relations) lodged an objection to the original ruling. According to an article in the Kansas City Star (“NFL Admitting Error on Abdullah Flag,” October 1, 2014, by Tod Palmer), “Abdullah is a devout Muslim.” The CAIR spokesman urged the league office to “clarify the policy” so as to “avoid the appearance of a double standard” for Muslims and non-Muslims.

The sensitivities of Americans have been abraded by over a half-century of controversy over the separation of church and state. Now the debate over public religious observance has invaded the football field or, more specifically, the end zone. Will theologians have to be on call for replay decisions by officials? Should the NFL nail a thesis on the separation of church and red zone to the main gate of its stadiums? Is all this really necessary?

The Economics of Player Celebration 

Does associating end-zone prayer with celebration seem odd? Abdullah himself referred to his action as “prostrat[ing] myself to God.” Still, the religious faithful at their devotions are often called “celebrants.” In any case, the attributes of prayer and those of celebration are virtually identical in this particular context, which allows us to apply economic principles to both types of action. Both interrupt the normal flow of play and divert attention away from the game and to the celebrant. A case can be made that each kind of action might either please or annoy a football fan.

One interesting thing about this example is the diametric tacks taken by the economist and the non-economist. The non-economist feels compelled to ascertain whether prayer itself is “good” or “bad.” A particularly discriminating non-economist might put that to one side and focus on whether or not prayer is a good thing in this particular context; e.g., on a football field with hundreds of millions of spectators. The economist may or may not feel qualified to supply answers to those questions, but does not care about the answers because they needn’t be answered by any particular individual. Markets exist to answer questions that individuals cannot or should not answer. 

Professional football is an intangible product supplied by the National Football League and its member franchises (teams) to consumers (fans). That product consists primarily, but not solely, of competitive athletic performance. A rhetorical question posed previously in this space asked: If O. J. Simpson were still in the full flower of his athletic skills, would he be working as a running back in the NFL, all other things equal? The obvious answer is no, because football fans do not want to watch murderers play professional football, no matter how talented they may be.

The advent of touchdown celebration allows us to add another qualifying example to our definition of the pro-football product. To the degree that some fans enjoy and even encourage end-zone celebrations, it is clear that they derive satisfaction (or utility, in economic jargon) from this practice. That means that the pro-football product is defined as “competitive athletic performance plus entertainment.”

This is not merely an ad hoc formulation cobbled together by an economist for a column. In the same edition of the same Sports section of the Kansas City Star as the story of the NFL’s recantation of the penalty on Abdullah, the adjacent story is a profile of Chiefs’ cornerback Sean Smith. Study Smith’s comments about his flamboyant style of play and the attitude of Chiefs’ coaches to the on-field exhibition of his personality.

“‘I think (the Miami game) gave the coaches a chance to see that when I’m able to go out there and just be myself and let my personality hang out there, not only do I play well, but people feed off my energy,’ Smith said.” [Quoting reporter Terez A. Paylor] “‘Smith, like his other more animated teammates, appreciates Coach Andy Reid’s philosophy. He encourages his players to play with passion and let their personalities shine through on the field, and Smith has embraced that approach this season.'”[Back to Smith again] “‘Coach emphasizes to let your personality show, go out there and cut loose, and be yourself and have fun…That’s something I definitely took personal. I’ve been a very enthusiastic guy. I like going out there and having fun and putting a smile on people’s faces.'”

This constitutes an implicit endorsement by a player and head coach, as cited by a beat reporter, of the economic model developed above.

Does this mean that end-zone celebrations are a good thing? Does it mean that players have a right to indulge them? Does it justify the NFL’s policy? Or condemn it? The answers to these questions are various forms of “no.” End-zone celebrations are one more input into the productive process, no better or worse a priori than any other. They may or may not be appropriate. Players have no “right” to indulge in them because players do not control the production process – the team does. The NFL is the franchisor; it has the right to control end-zone celebrations only if they affect its ability to provide the right competitive environment for the teams and not when only team profitability is at stake.

A last key question may be the one most frequently asked when this issue arises in public controversy. What about the player’s “right” of free religious observance?

Why Freedom of Religion Does Not Guarantee the Right to Celebrate in the End Zone 

Freedom is defined as the absence of external constraint. It does not guarantee the power to achieve one’s aims over opposition; in particular, it does not confer rights. A right can be enjoyed only when it does not abrogate the exercise of somebody else’s right. A contract is a voluntary agreement that imposes legal duties on both (all) parties to it.

These definitions lay the groundwork for our understanding of prayer in the end zone.

Husein Abdullah is an employee of the Kansas City Chiefs football team. He helps produce professional football entertainment but he does not control the mix of inputs into that product. The team decides who the other players will be, what style of football the team will play, what offensive plays the team will run, what defensive sets the team will employ, who the coaches, assistant coaches and trainers will be. If the team chooses all these inputs into the production of professional football entertainment, why should it not also control the nature of end-zone celebrations? Of course, the team may opt for spontaneity by giving free rein to players’ imaginations, just as conventional entertainers in show business may opt for improvisation over a scripted performance. Still, the team will almost certainly forbid players from celebrating by making obscene gestures to opposing players, revealing intimate body parts to fans and performing other acts virtually guaranteed to offend fans rather than entertaining them.

So we should hardly be astonished if the team should choose to regulate an action as potentially sensitive or embarrassing as an act of religious observance – should we? And, speaking as students of economic logic, we can make no objection to that – can we?

How about Husein Abdullah? Or, for that matter, any religious celebrant of any religious denomination? Is he being treated unfairly? Are his rights being violated?

No. As an employee of the team, Abdullah works at the direction of the team and for its benefit. The fact that Abdullah is engaging in a religious observance in this particular case is irrelevant. Abdullah certainly has freedom of religion. He has freedom of speech, too, but that doesn’t give him the right to say anything and everything under the sun in his capacity as an employee with no fear of repercussion.

Suppose Abdullah were an employee working in an office building. Does he have the “right” to pray at the top of his lungs while wandering around and between the desks of his fellow employees? No, he has no right to disrupt the workplace in this fashion even with the excuse that freedom of religion allows him the right of religious observance. Similarly, his “right” to pray in the end zone is circumscribed by team policy.

Does this mean that the Abdullahs of the world are inevitably booked for disappointment in their longing to prostrate themselves before God in the end zone? There is no reason to think so. We know, for instance, that celebrations were once frowned upon and suppressed yet are now practically de rigueur. There seems no way to predict what twists and turns this penchant for celebration will take because there is no way to predict how the tastes of the public will change.

Are we afraid that “discrimination” against unpopular minority groups (Muslims, for example) will proliferate? No, we are not, because in this context the term discrimination loses its familiar colloquial meaning. There is no arbitrary exercise of power against a group because no business has a duty to employ all inputs to an equal degree. Instead, businesses have a duty to their owners and consumers to employ inputs based on productivity, precisely by discriminating in favor of the more productive and against the less productive. Whether the inputs are engaging in religious observance, speech or any other activity does not matter. If a player can devise a productive form of celebration, this will make money for his team and provide the player with a celebratory meal ticket. If not, the player will lose the privilege of celebrating in the end zone. Business is not about what the boss wants or what employees want – it is about what consumers want. Economists characterize this principle as consumer sovereignty.

If a player demands a right to pray in the end zone, what he is really demanding is not freedom, nor is it the exercise of a valid right. Rather, it is the power to abrogate his duty to his employer at whim. As often emphasized in this space, this confusion of freedom with power, suffered by the general public, has been repeatedly exploited to political advantage by the left wing.

The Absurd Position in Which the NFL Finds Itself

The framework for analysis outlined above is simple and logical. It is an outgrowth of the system by which we divide labor to produce and exchange goods and services. The pellucid clarity of this system stands out in brilliant contrast to the existing framework under which the NFL currently operates.

The NFL currently has rules governing player celebrations. These rules are part of the code that governs play on the field. Violations are punished with penalties such as the one Abdullah earned for the Chiefs. Consequently, the rules must be mastered, interpreted and applied by the referees. Inevitably, as with all sports decisions made by referees or umpires, subjective perceptions and interpretations cause mistakes and controversy. (The distinction between kneeling and sliding to his knees probably reminded Abdullah of the judging on Dancing With the Stars.) Meanwhile, the entities whose interests are most directly affected – team ownership and management – must sit back and await the chance to appeal any wrongful decision later.

And the fans – the people for whose benefit the system operates – don’t get any direct say in this administrative process. Whereas in a competitive market, input from fans directly determines the nature and extent of player celebrations, the regulated market gives immediate control to the administrative mechanism of the NFL. This allows the entertainment part of the product to contaminate the competitive part when penalties are levied for unsportsmanlike conduct, whereas under a competitive system the team handles problems of unsuitable celebration outside of the context of the competitive contest.

That is not all there is to object to about top-down regulation of end-zone celebration by the NFL. In fact, it may not even be the worst of it. The Abdullah case illustrates the political hazards of the top-down approach. The NFL began by wanting to suppress inappropriate celebration, which is surely not objectionable in and of itself. But by doing the regulating itself instead of leaving it to the market, the NFL left itself open to the pressures of every special interest with an ax to grind. Because the NFL has no stake in the profits of any one team, it has no incentive to favor popular celebration; because it is a bureaucratic organization, it is open to influence by any group that applies pressure, CAIR being merely the most recent to step up to the grinder.

Suddenly, the NFL finds it can’t simply ban a form of celebration it doesn’t approve of (by “any player on the ground”) because that would run afoul of “religious observance.” Imagine – religious observance interfering with the conduct of a football game, when previously the only thing the two had in common was Sunday. And the minute the NFL starts making an exception for “religious observance,” it then has to confront the issue of different – and conflicting – religions. Wonderful – the two things attendees at a dinner party are never supposed to mention are politics and religion, and both are now elbowing their way into the end zone. What next? Will Stars of David start popping up on player helmets as an expression of players’ “right of free speech”? If only the fans had the power to throw a flag against the NFL for interference!

The General Principle at Work Here 

Americans have forgotten the value of allowing markets to decide basic questions. A recent Wall Street Journal op-ed commented offhandedly that we have lost confidence in free markets as a result of the Great Recession. If so, this is a monumental irony, since that event was caused by the interference with and subordination of the market process. It is not clear how much of the current attitude originates with a loss of faith and how much with simple ignorance. Regardless of the source, we must reverse this attitude to have any hope of survival, let alone prosperity. We know markets work because the world in general and the U.S. in particular would never have reached their present state of prosperity unless markets were as effective as free-market economists claim they are. The pretense that regulated, administrative markets are a vehicle for perfect “social justice” is not merely a sham – it is a recipe for tyranny. Administrators possess neither the comprehensive information nor the omniscient sense of fairness necessary to decide whose celebrations to allow, which ones to ban and what standard to apply to all.

The best thing about the example of touchdown celebrations is that it provides a side-by-side illustration of free markets and regulated, administrative markets. The free market is player celebrations as they evolved in recent years, encouraged by fan response and governed by individual teams. The Kansas City Star excerpts show in so many words that this market exists, and the evidence of our senses shows that it works just as economic logic predicts. And our ever-more-dismal experience with top-down, bureaucratic NFL regulation shows that rule by fiat and by ventriloquists in the chattering classes is an escalating failure.

What about the older fans who are appalled by player celebrations and long for the good old days of strong, silent, heroic players like Brown and Unitas? Why, we’ll just have to find a team that suits our tastes – or found one.

DRI-291 for week of 7-27-14: How to Debate Bill Moyers

An Access Advertising EconBrief:

How to Debate Bill Moyers

In the course of memorializing a fellow economist who died young, Milton Friedman observed that “we are all of us teachers.” He meant the word in more than the academic sense. Even those economists who live and work outside the academy are still required to inculcate economic fundamentals in their audience. The general public knows less about economics than a pig knows about Sunday – a metaphor justly borrowed from Harry Truman, whose opinion of economists was famously low.

Successful teachers quickly sense that they have entered their persuasive skills into a rhetorical contest with the students’ inborn resistance to learning. Economists face the added handicap that most people overrate their own understanding of the subject matter and are reluctant to jettison the emotional baggage that hinders their absorption of economic logic.

All this puts an economist behind the eight-ball as educator. But in public debate, economists usually find themselves frozen against the rail as well (to continue the analogy with pocket billiards). The most recent case of this competitive disadvantage was the appearance by Arthur C. Brooks, titular head of the conservative American Enterprise Institute (AEI), on the PBS interview program hosted by longtime network fixture Bill Moyers.

Brooks vs. Moyers: An Unequal Contest

At first blush, one might consider the pairing of Brooks, seasoned academic, Ph.D. and author of ten books, with Moyers, onetime divinity student and ordained minister who left the ministry for life in politics and journalism, to be an unequal contest. And so it was. Brooks spent the program figuratively groping for a handhold on his opponent while Moyers railed against Brooks with abandon. It seemed clear that each had different objectives. Moyers was insistent on painting conservatives (directly) and Brooks (indirectly) as insensitive, unfeeling and uncaring, while Brooks seemed content that he himself understood the defensive counterarguments he made on his own behalf, even if nobody else did.

Moyers never lost sight of the fact that he was performing to an audience whose emotional buttons he knew from memory and long experience. Brooks was speaking to a critic in his own head rather than playing to an alien house whose sympathies were presumptively hostile.

To watch with a rooting interest in Brooks’ side of the debate was to risk death from utter frustration. In this case, the only balm of Gilead lies in restaging Brooks’ reactions to Moyers’ sallies. This should amount to a debater’s handbook for economists in dealing with the populists of the hard political left wing.

Who is Bill Moyers?

It is important for any debater to know his opponent going into the debate. Moyers is careful to put up a front as an honest broker in ideas. Brooks’ appearance on Moyers’ show is headlined as “Arthur C. Brooks: The Conscience of a Compassionate Conservative,” as if to suggest that Moyers is giving equal time in good faith to an ideological opponent.

This is sham and pretense. Bill Moyers is a professional hack who has spent his whole life in the service of the political left wing. While in his teens, he became a political intern to Texas Senator Lyndon Johnson. After acquiring a B.A. degree in journalism from the University of Texas at Austin, Moyers got an M.A. from the Southwestern Baptist Theological Seminary in Fort Worth, Texas. After ordination, he forsook the ministry for a career in journalism and left-wing politics, two careers that have proved largely indistinguishable over the last few decades. He served in the Peace Corps from 1961-63 before joining the Johnson Administration, serving as LBJ’s Press Secretary from 1965-67. He performed various dirty tricks under Johnson’s direction, including spearheading an FBI investigation of Goldwater campaign aides to uncover usable dirt for the 1964 Presidential campaign. (Apparently, only one traffic violation and one illicit love affair were unearthed among the fifteen staffers.) A personal rift with Johnson led to his resignation in 1967. Moyers edited the Long Island publication Newsday for three years and thereafter alternated between broadcast journalism (PBS, CBS, back to PBS) and documentary-film production until his elevation to the presidency of the Schumann Center for Media and Democracy in 1990. Now 80 years old, he occupies a position best described as “political-hack emeritus.”

With this resume under his belt, Moyers cannot maintain any pretense as an honest broker in ideas, his many awards and honorary degrees notwithstanding. After all, the work of America’s leading investigative reporters, James Steele and Donald Barlett, has been exposed in this space as shockingly inept and politically tendentious. Journalists are little more than political advocates and Bill Moyers has thrived in this climate.

In the 1954 movie Night People, Army military intelligence officer Gregory Peck enlightens American politician Broderick Crawford about the true nature of the East German Communists who have kidnapped Crawford’s son. “These are cannibals…bloodthirsty cannibals who are trying to eat us up,” Peck insists. That describes Bill Moyers and his ilk, who are among those aptly characterized by F.A. Hayek as the “totalitarians in our midst.”

This is the light in which Arthur Brooks should have viewed his debate with Bill Moyers. Unfortunately, Brooks seemed stuck in defensive mode. His emphasis on “conscience” and “compassion” seemed designed to stress that he had a conscience – why invite the inference that this was ever in doubt? – and that he was a compassionate conservative – as opposed to what other kind, exactly? Thus, he began by giving hostages to the enemy before even sitting down to debate.

Brooks spent the interview crouched in this posture of defense, thereby guaranteeing that he would lose the debate even if he won the argument.

Moyers’ Talking Points – and What Brooks Should Have Said

Moyers’ overall position can be summarized in terms of what the great black economist Thomas Sowell has called “volitional economics.” The people Moyers disapproves of – that is, right-wingers and owners of corporations – have bad intentions and are, ipso facto, responsible for the ills and bad outcomes of the world.

Moyers: “Workers at Target, McDonald’s and Wal-Mart need food stamps to survive…Wal-Mart pays their workers so little that their average worker depends on $4,000 per year in government subsidies.”

Brooks: “Well, we could pay them a higher minimum wage – then they would be unemployed and be completely on the public dole…”

Moyers: “Because the owners of Wal Mart would not want to pay them that higher minimum wage [emphasis added].”

 

WHAT BROOKS SHOULD HAVE SAID: “Wait a minute. Did you just say that the minimum wage causes higher unemployment because business owners don’t want to pay it? Is that right? [Don’t go on until he agrees.] So if the business owners just went ahead and paid all their low-skilled employees the higher minimum wage instead of laying off some of them, everything would be fine, right? That’s what your position is? [Make him agree.]

Well, then – WHY DON’T YOU DO IT? WHY DON’T YOU – BILL MOYERS – GO BUY A MCDONALD’S FRANCHISE AND PAY EVERY LOW-SKILLED EMPLOYEE CURRENTLY MAKING THE MINIMUM WAGE AND EVERY NEW HIRE THE HIGHER MINIMUM WAGE YOU ADVOCATE. SHOW US ALL HOW IT’S DONE. DON’T JUST CLAIM THAT I’M WRONG – PROVE IT FOR ALL THE WORLD TO SEE. THEN YOU CAN HAVE THE LAUGH ON ME AND ALL MY RIGHT-WING FRIENDS.

[When he finishes sputtering:] You aren’t going to do it, are you? You certainly can’t claim that Bill Moyers doesn’t have the money to buy a franchise and hire a manager to run it. And you certainly can’t claim that the left-wing millionaires and billionaires of the world don’t have the money – not with Tom Steyer spending a hundred million dollars advertising climate change. The minimum wage has been in force since the 1930s and the left wing has been singing its praises for my whole life – but when push comes to shove the left-wing businessmen pay the same wages as the right-wing businessmen. Why? Because they don’t want to go broke, that’s why.

WHY IT IS IMPORTANT TO SAY THIS: The audience for Bill Moyers’ program consists mainly of people who agree with Bill Moyers; that is, of economic illiterates who do their reasoning with their gall bladders. It is useless to use formal economic logic on them because they are impervious to it. It is futile to cite studies on the minimum wage because the only studies they care about are the recent ones – dubious in the extreme – that claim to prove the minimum wage has only small adverse effects on employment.

The objective with these people is roughly the same as with Moyers himself: take them out of their comfort zone. There is no way they can fail to understand the idea of doing what Moyers himself advocates because it is what they themselves claim to want. All Brooks would be saying is: Put your money where your mouth is. This is the great all-purpose American rebuttal. And he would be challenging people known to have money, not the poor and downtrodden.

This is the most straightforward, concrete, down-to-earth argument. There is no way to counter it or reply to it. Instead of leaving Brooks at best even with Moyers in a “he-said, he-said” sort of swearing contest, it would have left him on top of the argument with his foot on Moyers’ throat, looking down. At most, Moyers could have limply responded with, “Well, I might just do that,” or some such evasion.

Moyers: “Just pay your workers more… [But] instead of paying a living wage… [owners] do stock buy-backs…”

Brooks: [ignores the opportunity].

WHAT BROOKS SHOULD HAVE SAID: “Did you just use the phrase ‘LIVING WAGE,’ Mr. Moyers? Would you please explain just exactly what a LIVING WAGE is? [From here on, the precise language will depend on the exact nature of his response, but the general rebuttal will follow the same pattern as below.] Is this LIVING WAGE a BIOLOGICAL LIVING WAGE? I mean, will workers DIE if they don’t receive it? But they don’t have it NOW, right? And they’re NOT dying, right? So the term as you use it HAS NOTHING TO DO WITH LIVING OR DYING, does it? It’s just a colorful term that you use because you hope it will persuade people to agree with you by getting them to feel sorry for workers, isn’t it?

There are over 170 countries in the world, Mr. Moyers. In almost all of those countries, low-skilled workers work for lower wages than they do here in the United States. Did you know that? In many countries, low-skilled workers earn the equivalent of less than $1,000 per year in U.S. dollars. In a few countries, they earn just a few hundred dollars worth of dollar-equivalent wages per year. PER YEAR, Mr. Moyers. For you to sit here and use the term “LIVING WAGE” for a wage THIRTY TO FIFTY TIMES HIGHER THAN THE WAGE THEY EARN IS POSITIVELY OBSCENE. Don’t you agree, MR. MOYERS? They don’t die either – BUT I BET THEY GET PRETTY HUNGRY SOMETIMES. What do you bet – MR. MOYERS?

WHY IT IS IMPORTANT TO SAY THIS: The phrase “living wage” has been a left-wing catch-phrase longer than most people today have been alive. Its use is “free” because users are never challenged to explain or defend it. It sounds good because it has a nice ring of urgency and necessity to it. But upon close examination it disintegrates like toilet tissue in a bowl. There is no such thing as a wage necessary to sustain life in the biological sense. For one thing, it would vary across a fairly wide range depending on factors ranging from climate to gender to race to nutrition to prices to wealth to…well, the factors are numerous. It would also be a function of time. Occasionally, classical economists like David Ricardo and Karl Marx would broach the issue, but they never answered any of the basic questions; they just assumed them away in the time-honored manner of economists everywhere. For them, any concept of a living wage was purely theoretical or algebraic, not concrete or numerical. Today, for the left wing, the living wage is purely a polemical concept with zero concreteness. It is merely a club to beat the right wing with. It is without real-world significance or content.

Given this, it is madness to allow your debate opponent the use of this club. Take the club away from him and use it against him.

Bill Moyers: “Wal Mart, which earned $17 billion in profit last year…”

Arthur Brooks: [gives no sign of noticing or caring].

WHAT ARTHUR BROOKS SHOULD HAVE SAID: “You just said that Wal Mart earned $17 billion in profit last year. You did say that, didn’t you – I don’t want to be accused of misquoting you. Does that seem like a lot of money to you? [He will respond affirmatively.] Why? Is it a record of some kind? Did somebody tell you it was a lot of money? Or does it just sort of sound like a lot? I’m asking this because you seem to think that sum of money has a lot of significance, as though it were a crime, or a sin, or special in some way. You seem to think it justifies special notice on your part. You seem to think it justifies your demanding that Wal Mart pay higher wages to their workers than they’re doing now. And my question is: WHY? Unless my ears deceive me, you seem to be making these claims on the basis of the PURE SIZE of the amount. You think Wal Mart should “give” some of this money to its low-skilled workers – is that right? [He will agree enthusiastically.]

OK then. Here’s what I think: WHY DON’T YOU, MR. MOYERS? [He will pretend not to understand.] I MEAN EXACTLY WHAT I SAID. WHY DON’T YOU DO IT, MR. MOYERS, IF THAT’S WHAT YOU BELIEVE? [He will smile or laugh: “Because I’m not Wal Mart, that’s why.”] BUT YOU ARE, MR. MOYERS. OR YOU CAN BE. ANYBODY CAN BE. FOR THAT MATTER, THOSE WAL-MART WORKERS WHOSE WELFARE YOU CLAIM TO CARE FOR SO MUCH CAN BE, TOO. ALL YOU HAVE TO DO IS BUY WAL-MART STOCK. IT TRADES PUBLICLY, YOU KNOW.

IF YOU THINK WAL-MART SHOULD GIVE ITS MONEY AWAY, THEN BUY WAL-MART STOCK, TAKE THE DIVIDENDS IT PAYS YOU AND GIVE THE MONEY AWAY WHEREVER YOU THINK IT SHOULD GO. AFTER ALL, ONCE YOU BUY WAL-MART STOCK…NOW YOU’RE WAL-MART. YOU OWN THE COMPANY. AT LEAST, YOU OWN A FRACTION OF IT, JUST LIKE ALL THE OTHER OWNERS OF WAL-MART DO. YOU WANT WAL-MART TO GIVE ITS PROFITS AWAY? OK, GIVE THEM AWAY YOURSELF. WHY SHOULD THE GOVERNMENT WASTE MILLIONS OF DOLLARS IN BUREAUCRATIC OVERHEAD ACCOMPLISHING SOMETHING THAT YOU CAN ACCOMPLISH CHEAPLY FOR THE COST OF A DISCOUNT BROKERAGE COMMISSION?

And you can deduct it from your income tax as a charitable contribution…MR. MOYERS.

As far as that’s concerned, as a matter of logic, if Wal-Mart’s workers really agree with you that Wal-Mart is scrooging away in profits the money that should go to them in wages, then the workers could do the same thing, couldn’t they? They could buy Wal-Mart’s stock and earn that share of the profit that you want the company to give them. It’s no good claiming they don’t have the money to do it because they’d not only be getting a share of these profits you say are so fabulous, they’d also be owning the company that you’re claiming is such a super profit machine that it’s got profits to burn – or give away. If what you say is really true, you should be screaming at Wal-Mart’s workers to buy shares instead of wasting time trying to get the government to take money away from Wal-Mart so some of it can trickle down to the workers.

Of course, that’s the catch. I don’t even know if YOU YOURSELF BELIEVE THE BALONEY YOU’VE BEEN SPREADING AROUND IN THIS INTERVIEW. I don’t think you even know the truth about all three of those companies that you claim are so flush with profits. To varying degrees, they’re actually in trouble, MR. MOYERS. It’s all in the financial press, MR. MOYERS – which you apparently haven’t read and don’t care to read. McDonald’s has had to reinvent itself to recover its sales. Wal-Mart is floundering. Target has lost touch with its core customers. And the $17 billion that seems like so much profit to you doesn’t constitute such a great rate of return when you measure it against the capital invested by Wal-Mart’s hundreds of thousands of individual shareholders – as you’re about to find out when you take my advice to put your money where your great big mouth is – MR. MOYERS.

WHY IT IS IMPORTANT TO SAY THIS: The mainstream press has been minting headlines out of absolute corporate profits for decades. The most prominent victims of this have been the oil companies because they have been the biggest private companies in the world. Any competent economist knows that it is the rate of return that reveals true profitability, not the absolute size of profits. Incredibly, this fact has not penetrated the public consciousness despite the popularity of 401(k) retirement-investment accounts.
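
To see why, it helps to run the arithmetic. The sketch below is purely illustrative: the profit figure echoes the $17 billion cited above, but the equity base is a hypothetical round number, not Wal-Mart’s actual balance sheet.

```python
# Minimal sketch: why the rate of return, not the absolute profit figure,
# measures profitability. All numbers are hypothetical and illustrative.

def rate_of_return(profit, invested_capital):
    """Return profit as a percentage of the capital tied up in earning it."""
    return 100.0 * profit / invested_capital

profit = 17e9               # the headline figure cited above: $17 billion
shareholder_equity = 220e9  # hypothetical equity base (not an actual balance-sheet number)

print(f"Absolute profit: ${profit / 1e9:.0f} billion")
print(f"Rate of return on equity: {rate_of_return(profit, shareholder_equity):.1f}%")
# A 'huge' absolute profit earned on a huge capital base works out to a modest
# single-digit percentage return, which is the number that actually matters.
```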

Buying Wal-Mart stock is just another way of implementing the “put your money where your mouth is” strategy discussed earlier. If Bill Moyers’ view of the company were correct – which it isn’t, of course – it would make much more sense than redistributing money via other forms of government coercion.

The Goal of Debate

If you play poker and nobody ever calls your bluff, it will pay you to bluff on the slightest excuse. In debate, you have to call your debate opponent’s bluffs; otherwise, you will be bluffed down to your underwear even when your opponent isn’t holding any cards. Arthur Brooks was just as conservative in his debating style as in his ideology – he refused to call even Moyers’ most ridiculous bluffs. This guaranteed that the best outcome he could hope for was a draw even if his performance was otherwise flawless. It wasn’t, so he came off poorly.

Of course, he was never going to “win” the debate in the sense of persuading hard-core leftists to convert to a right-wing position. His job was to leave them shaken and uncomfortable by denying Bill Moyers the ease and comfort of taking his usual polemical stances without fear of challenge or rebuttal. This would have delighted the few right-wingers tuned in and put the left on notice that they were going to be bloodied when they tried their customary tactics in the future. In order to accomplish this, it was necessary to do two things. First, take the battle to Bill Moyers on his own level by forcing him to take his own advice, figuratively speaking. Second, clearly indicate by your contemptuous manner that you do not respect him and are not treating him as an intellectual equal and an honest broker of ideas. People react not only to what you say but to how you say it. If you respect your opponent, they will sense it and accord him that same respect. If you despise him, this will come through – as it should in this case. That is just as important as the intellectual part of the debate.

In a life-and-death struggle with cannibals, not getting eaten alive can pass for victory.

DRI-284 for week of 7-13-14: Why Big Government is Rotten to the Core: The Tale of the Taxpayers’ Defender Inside Federal Housing

An Access Advertising EconBrief:

Why Big Government is Rotten to the Core: The Tale of the Taxpayers’ Defender Inside Federal Housing

Today the trajectory of our economic lives is pointed steeply downward. This space has been disproportionately devoted to explaining both how and why. That explanation has often cited the theory of government failure, in which the purported objects of government action are subordinated to the desires of politicians, bureaucrats, government employees and consultants. Economists have been excoriated for sins of commission and omission. The resulting loss of personal freedom and marketplace efficiency has been decried. The progressive march toward a totalitarian state has been chronicled.

A recent column in The Wall Street Journal ties these themes together neatly. Mary Kissel’s “Weekend Interview” column of Saturday/Sunday, July 12/13, 2014, is entitled “The Man Who Took On Fannie Mae.” It describes the working life of “career bureaucrat” and economist, Edward DeMarco, whose most recent post was acting director of the Federal Housing Finance Agency. Ms. Kissel portrays him as the man “who fought to protect American taxpayers” and “championed fiscal responsibility” in government. As we shall see, however, he is really integral to the malfunctioning of big government in general and economics in particular.

The Career of Edward DeMarco

Edward DeMarco is that contradictory combination, a career government bureaucrat who is also a trained economist. He received a Ph.D. in economics from the University of Maryland in the late 1980s and went to work for the General Accounting Office (GAO). As “low man on the totem pole,” he was handed the job of evaluating Fannie Mae and Freddie Mac. They had been around since the 1930s but were known to few and understood by fewer in Congress. The decade-long, drawn-out, painful series of savings-and-loan bailouts had scalded the sensibilities of representatives and regulators alike. DeMarco’s job was to determine if Fannie and Freddie were another bailout landmine lying in wait for detonation.

His answer was: yes. The implicit taxpayer backstop provided to these two institutions – not written into their charter but tacitly acknowledged by everybody in financial markets – allowed them to borrow at lower interest rates than competitors. This meant that they attracted riskier borrowers, which set taxpayers up to take a fall. And the Congressional “oversight” supposedly placing the two under a stern, watchful eye was actually doing the opposite – acting in cahoots with them to expand their empire in exchange for a cut of the proceeds.

DeMarco sounded the alarm in his report. And sure enough, Congress acted. In 1992, it established the Office of Federal Housing Enterprise Oversight (OFHEO). A triumph for government regulation! A vindication of the role of economics in government! A victory for truth, justice and the American way!

Yeah, right.

DeMarco pinned the tail on this donkey right smack on the hindquarters. “‘The Fannie and Freddie Growth Act,'” he called it, “because it told the market ‘Hey, we really care about these guys, and we’re concerned about them because they’re really important.'” In other words, the fix was in: Congress would never allow Fannie and Freddie to fail, and their implicit taxpayer guarantee was good as gold.

This was the first test of DeMarco’s mettle. In that sense, it was the key test, because the result jibed with the old vaudeville punchline, “we’ve already agreed on what you are; now we’re just haggling about the price.” As soon as the ineffectual nature of OFHEO crystallized, DeMarco should have screamed bloody murder. But the “low man on the totem pole” in a government bureaucracy can’t do that and still hope for a career; DeMarco would have had to say sayonara to the security of government employment in order to retain his integrity. Instead, he kept his mouth shut.

Kissel discreetly overlooks this because it doesn’t jibe with her picture of DeMarco as heroic whistleblower. She is acting as advocate rather than journalist, as editor rather than reporter.

Any doubts about the fairness of this judgment are dispelled by Kissel’s narrative. “After stints at the Treasury and Social Security Administration, DeMarco found himself working at the very oversight office that his reports to Congress had helped create.” Oh, he “found himself” working there, did he? At the very office that had double-crossed and betrayed him? “It was 2006, when Fannie and Freddie’s growth had been turbocharged by the government’s mortgages-for-all mania. Mr. DeMarco recalls that during his ‘first couple of weeks’ at the agency, he attended a conference for supervision staffers organized to tell them ‘about great, new mortgage instruments’ – subprime loans, he says, with a sardonic chuckle.” But what exactly did he do about all this while it was in progress, other than chuckling sardonically?

The first twenty years of Edward DeMarco’s career illustrate the workings of big government to a T. They depict the “invisible handshake” between orthodox, mainstream economics and the welfare state that has replaced the “invisible hand” of the marketplace that economics used to celebrate.

The Mainstream Economist as Patsy for Politicians and Bureaucrats

Mainstream economists are trained to see themselves as “social engineers.” Like engineers, they are trained in advanced mathematics. Like engineers, they are trained as generalists in a wide-ranging discipline, but specialize in sub-disciplines – civil, mechanical and chemical engineering for the engineer, macroeconomics and microeconomics for the economist. Like engineers, economists hone their specialties even more finely into sub-categories like monetary economics, international economics, industrial organization, labor economics, financial economics and energy economics. Economists are trained to think of themselves as high theoreticians applying optimizing solutions to correct the failures of human society in general and markets in particular. They take it for granted that they will command both respect and power.

This training sets economists up to be exploited by practical men of power and influence. Lawyers utilize the services of economists as expert witnesses because economists can give quantitative answers to questions that are otherwise little more than blind guesses. Of course, the precision of those quantitative answers is itself suspect. If economists really could provide answers to real-world questions that are as self-assured and precise as they pretend on the witness stand, why would they be wasting their lives earning upper-middle-class money as expert witnesses? Why are they not fabulously rich from – let us say – plying those talents as traders in commodity or financial markets? Still, economists can fall back on the justified defense that nobody else can provide better estimates of (say) wages foregone by an injured worker or business profits lost due to tortious interference. The point is, though, that economists owe their status as experts to default; their claim on expertise is what the late Thorstein Veblen would call “ceremonial.”
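
To make concrete the sort of quantitative answer an expert witness supplies, here is a minimal sketch of a foregone-wages estimate computed as the discounted sum of projected future earnings. Every input (the wage, the raise rate, the discount rate, the horizon) is a hypothetical placeholder, not a figure from any actual case.

```python
# Minimal sketch of a foregone-wages estimate: the present value of the
# earnings an injured worker is projected to lose. All inputs are hypothetical.

def present_value_of_lost_wages(annual_wage, growth_rate, discount_rate, years):
    """Discount each year's projected lost wage back to the present and sum."""
    total = 0.0
    for t in range(1, years + 1):
        projected_wage = annual_wage * (1 + growth_rate) ** t
        total += projected_wage / (1 + discount_rate) ** t
    return total

# Hypothetical inputs: $40,000 wage, 2% annual raises, 5% discount rate, 10-year horizon.
estimate = present_value_of_lost_wages(40_000, 0.02, 0.05, 10)
print(f"Estimated present value of foregone wages: ${estimate:,.0f}")
```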

When economists enter the realm of politics, they are the veriest babes in the savage wood. Politicians want to take other people’s money and use it for their own – almost always nefarious – purposes. They must present a pretense of legitimacy, competence and virtue. They will use anybody and everybody who is useful to them. Economists hold doctorates; they teach at universities and occupy positions of respect. Therefore, they are ideal fronts for the devices of politicians.

Politicians use economists. They hire them or consult with them or conspicuously call them to testify in Congress. This satisfies the politicians’ debt to legitimacy, competence, virtue and conscience (if they have one). Have they not conferred with the best available authority? And having done so, politicians go on to do whatever they intended to do all along. They either ignore the economist or twist his advice to suit their intentions.

That is exactly what happened to Edward DeMarco. His superiors gave him an assignment. Like a dutiful economist, he fulfilled it and sat back waiting for them to act on his advice. They acted, all right – by creating an oversight body that perverted DeMarco’s every word.

Deep down, mainstream economists envision themselves as philosopher kings – either as (eventual) authority figures or as Talleyrands, the men behind the throne who act as ventriloquists to power. When brought face-to-face with the bitter disillusion of political reality, they react either by retreating into academia in a funk or by retreating into their bureaucratic shell. There is a third alternative: occupational prostitution. Some economists abandon their economic principles and become willing mouthpieces for politicians. They are paid in money and/or prestige.

It is clear that DeMarco took the path of bureaucratic compliance. Despite the attempt of WSJ’s Kissel to glamorize his role, his career has obviously been that of follower rather than either leader or whistleblower. His current comments show that he harbors great resentment over being forced to betray his principles in order to make the kind of secure living he craved.

For our purposes, we should see him as the wrong man for the job of taxpayers’ defender. That job required an extraordinary man, not a bureaucrat.

DeMarco, DeMartyr

The second career of Edward DeMarco – that of “DeMarco, DeMartyr” to the cause of fiscal responsibility and taxpayer interests – began after the housing collapse and financial panic of 2008. After bailing out Fannie and Freddie, Congress had to decide whether to close them down or reorganize them. It fell back on an old reliable default option – create a new agency, the Federal Housing Finance Agency, whose job it was to ride herd on the “toxic twins.” When FHFA’s director, James Lockhart, left in August 2009, Treasury Secretary Timothy Geithner appointed DeMarco as acting director.

DeMarco began by raising executive salaries to stem the exodus of senior management. This got him bad press and hostility from both sides of the Congressional aisle. DeMarco set out to reintroduce the private sector to the mortgage market by reducing loan limits and shrinking the mortgage portfolios of Fannie and Freddie. But we shouldn’t get the wrong idea here – DeMarco wasn’t actually trying to recreate a free market in housing. “I wasn’t trying to price Fannie and Freddie out of the market so much as get the price closer so that the taxpayer capital is getting an appropriate rate of return and that, more important, we start selling off this risk,” DeMarco insists. He was just a meliorist, trying to fine-tune a more efficient economic outcome by the lights of the academic mainstream. Why, he even had the President and the Financial Stability Oversight Council (FSOC) on his side.

Ms. Kissel depicts DeMarco as a staunch reformer who was on his way to turning the housing market around. “Mr. DeMarco’s efforts started to show results. Housing prices recovered, both [Fannie and Freddie] started to make money – lots of it – and private insurance eyed getting back into the market. Then in August 2012 the Obama administration decided to ‘sweep’ Fannie and Freddie’s profits, now and in the future, into the government’s coffers. The move left the companies unable to build up capital reserves, and shareholders sued.”

That was just the beginning. DeMarco was pressured by Congress and the administration to write down principal on the loans of borrowers whose homes were “underwater,” i.e., worth less at current market value than the balance remaining on the mortgage. He also opposed creation of a proposed housing trust fund (or “slush fund,” as Kissel aptly characterizes it). Apart from the obvious moral hazard involved in systematically redrawing contracts to favor one side of the transaction, DeMarco noted the hazard to taxpayers in giving borrowers – 80% of whom were still making timely payments – an incentive to default or plead hardship in order to benefit financially. How could mortgage markets attract investment and survive in the face of this attitude?

This intelligent evaluation won him the undying hatred of “members of Congress [and] President Obama’s liberal allies [including] White House adviser Van Jones [who] told the Huffington Post ‘you could have the biggest stimulus program in America by getting rid of one person,’” namely, DeMarco. “Realtors, home builders, the Mortgage Bankers Association, insured depositories and credit unions” fronted for the White House by pressuring DeMarco to “degrade lending standards” to the least creditworthy borrowers – a practice that epitomized the housing bubble at its frothiest. “Protestors organized by progressive groups showed up more than once outside [DeMarco’s] house in Silver Spring, MD, demanding his ouster. A demonstration in April last year brought out 500 picketers with ‘Dump DeMarco’ signs and 15-foot puppets fashioned to look like him. ‘My first reaction was of course one of safety,’ [said DeMarco]. ‘When I first saw them, I was standing a few feet from the window of a ground-level family room and they’re less than 10 feet away through this pane of glass, and it was a crowd of people so big I couldn’t tell how many people were out there. And then all the chanting and yelling started.’ His wife had gone to pick up their youngest daughter…‘so I had to get on the phone and tell her, “Don’t come.”’ Then he called the police, who eventually cleared the scene. ‘It was unsettling,’ he says. ‘I think it was meant to be unsettling… They wanted me to start forgiving debt on mortgages.’” This is what Ms. Kissel calls “the multibillion-dollar do-over,” to which “Mr. DeMarco’s resistance made him unpopular in an administration that was anxious to refire the housing market.” Ms. Kissel’s metaphor of government as arsonist is the most gripping writing in the article.

Epilogue at FHFA

Edward DeMarco was the “acting” director at FHFA. The Senate capitulated to pressure for his removal by confirming Mel Watt, President Obama’s nominee pushed through by Majority Leader Harry Reid, as permanent director. Watt immediately began implementing the agenda DeMarco had resisted. DeMarco had successfully scheduled a series of increases in loan-guarantee fees as one of several measures to entice private insurers back into the market. Watt delayed them. He refused to lower loan limits for Fannie and Freddie from their $625,000 level. He directed the two companies to seek out “underserved, creditworthy borrowers,” i.e., people who can’t afford houses. He assured the various constituencies clamoring for DeMarco’s ouster that “government will remain firmly in control of the mortgage market.”

DeMarco’s valedictory on all this is eye-opening in more ways than one. Reviewing what Ms. Kissel primly calls “government efforts to promote affordable housing,” DeMarco dryly observes, “Let’s say it was a failed effort…To me, if you go through a 50-year period, and you do all these things to promote housing, and the homeownership rate is [the same as it was 50 years ago], I think the market’s telling you we’re at an equilibrium.” The assumption that only government can foster homeownership among people “below median income,” he adds, itself “suggests a troubling view of markets themselves.”

And now the whole process is starting all over again. “If we have another [sic] recession, if there’s some foreign crisis that …affects our economy, it doesn’t matter whatever the instigating event is, the point is that if we have another round of house-price declines like we’ve had, we’re going [to] erode most of that remaining capital support.” Characteristically, he refuses to state forthrightly the full implications of his words, which are: we are tottering on the brink of full-scale financial collapse.

Edward DeMarco: Blackboard Economist

The late Nobel laureate Ronald Coase derided what he called “blackboard economists” – the sort who pretended to solve practical problems by proposing a theoretical solution that assumed they possessed information they didn’t and couldn’t have. (Usually the solution came in the form of either mathematical equations or graphical geometry depicted on a classroom blackboard, hence the term.)

Was Coase accusing his fellow economists of laziness? Yes and no. Coase believed that transactions costs were a key determinant of economic outcomes. Instead of investigating transactions costs of action in particular cases, economists were all too prone to assume those costs were either zero (allowing markets to work perfectly) or prohibitive (guaranteeing market failure). Coase insisted that this was pure laziness on the part of the profession.

But information isn’t just lying around in the open waiting for economists to discover it. One of Coase’s instructors at the London School of Economics, future Nobel laureate F.A. Hayek, pointed out that orthodox economic theory assumed that everybody already knew all the information needed to make optimal decisions. In reality, the relevant information was dispersed in fragmentary form inside the minds of billions of people rather than concentrated in easily accessible form. The market process was not a mere formality of optimization using given data. Instead, it was markets that created the incentives and opportunities for the generation and collation of this fragmented, dispersed information into usable form.

Blackboard economists were not merely lazy. They were unforgivably presumptuous. They assumed that they had the power to effectuate what could only be done by markets, if at all.

That lends a tragic note to Ms. Kissel’s assurance that “Mr. DeMarco isn’t against government support for housing – if done properly.” After spending his career as “the loneliest man in government” while fighting to stem the tide of the housing bubble, Edward DeMarco now confesses that he doesn’t oppose government interference in the housing market after all! The problem is that the government didn’t ask him how to go about it – they didn’t apply just the right optimizing formula, didn’t copy his equations off the blackboard.

And when President Obama and Treasury Secretary Geithner and the housing lobbyists and the realtors and builders and mortgage bankers and lenders and progressive ideologues hear this explanation, what is their reaction? Do they smack their foreheads and cry out in dismay? Do they plead, “Save us from ourselves, Professor DeMarco?”

Not hardly. The mounted barbarians run roughshod over Mr. DeMarco waving his blackboard formula and leave him rolling in the dust. They then park their horses outside Congress and testify: “See? He’s in favor of government intervention, just as we are – we’re just haggling about the price.” Politicians with a self-interested agenda correctly view any attempt at compromise as a sign of weakness, an invitation to “let’s make a deal.” It invites contempt rather than respect.

That is exactly what happened to Edward DeMarco. He is left licking the wounds of 25 years of government service and whining about the fact that politicians are self-interested, that government regulators do not really regulate but in fact serve the interests of the regulated, and that the political left wing will stop at nothing, including physical intimidation and force.

No spit, Spurlock. We are supposed to stand up and cheer for a man who is only now learning this after spending 25 years in the belly of the savage beast? Whose valiant efforts at reform consisted of recommending optimizing nips and tucks in the outrageous government programs he supervised? Whose courageous farewell speech upon being run out of office, a la Douglas MacArthur, is “I’m not against government support for housing if done properly?”

Valedictory for Edward DeMarco

The sad story of Edward DeMarco is surely one more valuable piece of evidence confirming the theory of big government as outlined in this space. Those who insist that government is really full of honest, hard-working, well-meaning people with idealistic good intentions doing a dirty job the best they can will now have an even harder time saying it with a straight face. It is one thing when big government opposes exponents of laissez faire; we expect bank robbers to shoot at the police. But gunning down an innocent bystander for shaking his fist in reproof shows that the robber is a hardened killer rather than a starving family man. When the welfare state steamrolls over the efforts of an Edward DeMarco to reform it at the margins, it should be clear to one and all that big government is rotten to the core.

Even so, the fact that Edward DeMarco was and is an honest man who thought he was doing good does not make him a hero. Edward DeMarco is not a martyr. He is a cautionary example. The only way to counteract big government is to oppose it openly and completely by embracing free markets. Anything less fails while giving aid and comfort to the enemy. Failure coupled with career suicide can only be redeemed by service to the clearest and noblest of principles.

DRI-254 for week of 7-6-14: The Selling of Environmentalism

An Access Advertising EconBrief:

The Selling of Environmentalism

The word “imperialism” was popularized by Lenin to define a process of exploitation employed by developed nations in the West on undeveloped colonies in the Eastern hemisphere. In recent years, though, it has been used in a completely different context – to describe the use of economic logic to explain practically everything in the world. Before the advent of the late Nobel laureate Gary Becker, economists were parochial in their studies, confining themselves almost exclusively to the study of mankind in its commercial and mercantile life. Becker trained the lens of economic theory on the household, the family and the institution of marriage. Ignoring the time-honored convention of treating “capital” as plant and equipment, he (along with colleagues like Theodore Schultz) treated human beings as the ultimate capital goods.

Becker ripped the lid off Pandora’s Box and the study of society will never be the same again. We now recognize that any and every form of human behavior might profitably be seen in this same light. To be sure, that does not mean employing the sterile and limiting tools of the professional economist; namely, advanced mathematics and formal statistics. It simply means subjecting human behavior to the logic of purposeful action.

Environmentalism Under the Microscope

The beginnings of the environmental movement are commonly traced to the publication of Silent Spring in 1962 by marine biologist Rachel Carson. That book sought to dramatize the unfavorable effects of pesticides, industrial chemicals and pollution upon wildlife and nature. Carson had scientific credentials – she had previously published a well-regarded book on oceanography – but this book, completed during her terminal illness, was a polemic rather than a sober scientific tract. Its scientific basis has been almost completely undermined in the half-century since publication. (A recent book devoted entirely to re-examination of Silent Spring by scientific critics is decisive.) Yet this book galvanized the movement that has since come to be called environmentalism.

An “ism” ideology is, or ought to be, associated with a set of logical propositions. Marxism, for example, employs the framework of classical economics as developed by David Ricardo but deviates in its creation of the concept of “surplus value” as generated by labor and appropriated by capitalists. Capitalism is a term intended invidiously by Marx but that has since morphed into the descriptor of the system of free markets, private property rights and limited government. What is the analogous logical system implied by the term “environmentalism?”

There isn’t one. Generically, the word connotes an emotive affinity for nature and corresponding distaste for industrial civilization. Beyond that, its only concrete meaning is political. The problem of definition arises because, in and of itself, an affinity for nature is insufficient as a guide to human action. For example, consider the activity of recycling. Virtually everybody would consider it de rigueur as part of an environmentalist program. The most frequently stated purpose of recycling is to relieve pressure on landfills, which are ostensibly filling up with garbage and threatening to overwhelm humanity. The single greatest component of landfills is newsprint. But the leachates created by the recycling of newsprint are extremely harmful to “the environment”; e.g., their acidic content poisons soils and water and they are very costly to divert. We have arrived at a contradiction – is recycling “good for the environment” or “bad for the environment?” There is no answer to the question as posed; the effects of recycling are couched in terms of tradeoffs. In other words, the issue is dependent on economics, not emotion only.

No matter where we turn, “the environment” confronts us with such tradeoffs. Acceptance of the philosophy of environmentalism depends on getting us to ignore these tradeoffs by focusing on one side and ignoring the other. Environmental advocates of recycling, for instance, customarily ignore the leachates and robotically beat the drums for mandatory recycling programs. When their lopsided character is exposed, environmentalists retreat to the carefully prepared position that the purity of their motives excuses any lapses in analysis and overrides any shortcomings in their programs.

Today’s economist does not take this attitude on faith. He notes that the political stance of environmentalists is logically consistent even if their analysis is not. The politics of environmentalism can be understood as a consistent attempt to increase the real income of environmentalists in two obvious ways: first, by redistributing income in favor of their particular preferences for consumption (enjoyment) of nature; and second, by enjoying real income in the form of power exerted over people whose freedom they constrain and real income they reduce through legislation and administrative and judicial fiat.

Thus, environmentalism is best understood as a political movement existing to serve economic ends. In order to do that, its adherents must “sell” environmentalism just as a producer sells a product. Consumers “buy” environmentalism in one of two ways: by voting for candidates who support the legislation, agencies, rules and rulings that further the environmental agenda; and by donating money to environmental organizations that provide real income to environmentalists by employing them and lobbying for the environmental agenda.

Like the most successful consumer products, environmentalism has many varieties. Currently, the most popular and politically successful one is called “climate change,” which is a model change from the previous product, “global warming.” In order to appreciate the economic theory of environmentalism, it is instructive to trace the selling of this doctrine in recent years.

Why Was the Product Called “Climate Change” Developed?

The doctrine today known as “climate change” grew out of a long period of climate research on a phenomenon called “global warming.” This began in the 1970s. Just as businessmen spend years or even decades developing products, environmentalists use scientific (or quasi-scientific) research as their product-development laboratory, in which promising products are developed for future presentation on the market. Although global warming was “in development” throughout the 1970s and 80s, it did not receive its full “rollout” as a full-fledged environmental product until the early 1990s. We can regard the publication of Al Gore’s Earth in the Balance in 1992 as the completed rollout of global warming. In that book, Gore presented the full-bore apocalyptic prophecy that human-caused global warming threatened the destruction of the Earth within two centuries.

Why was global warming “in development” for so long? And after spending that long in development limbo, why did environmentalists bring it “to market” in the early 1990s? The answers to these questions further cement the economic theory of environmentalism.

Global warming joined a long line of environmental products that were brought to market beginning in the early 1960s. These included conservation, water pollution, air pollution, species preservation, forest preservation, overpopulation, garbage disposal, inadequate food production, cancer incidence and energy insufficiency. The most obvious, logical business rationale for a product to be brought to market is that its time has come, for one or more reasons. But global warming was brought to market by a process of elimination. All of the other environmental products were either not “selling” or had reached dangerously low levels of “sales.” Environmentalists desperately needed a flagship product and global warming was the only candidate in sight. Despite its manifest deficiencies, it was brought to market “before its time,” i.e., before its scientific merits had been demonstrated. In this regard, it differed from most (although not all) of the previous environmental products.

Those are the summary answers to the two key questions posed above. Global warming (later climate change) spent decades in development because its scientific merits were difficult if not impossible to demonstrate. It was brought to market in spite of that limitation because environmentalists had no other products with equivalent potential to provide real income and had to take the risks of introducing it prematurely in order to maintain the “business” of environmentalism as a going concern. Each of these contentions is fleshed out below.

The Product Maturation Suffered by Environmentalism

Businesses often find that their products lead limited lives. These limitations may be technological, competitive or psychological. New and better processes may doom a product to obsolescence. Competitors may imitate a product into senescence or even extinction. Fads may simply lose favor with consumers after a period of infatuation.

As of the early 1990s, the products offered by environmentalism were in various stages of maturity, decline or death.

Air pollution was a legitimate scientific concern when environmentalism adopted it in the early 1960s. It remains so today because the difficulty of enforcing private property rights in air makes a free-market solution to the problem of air pollution elusive. But by the early 1990s, even the inefficient solutions enforced by the federal government had reduced the problem of air pollution to full manageability.

Between 1975 and 1991, the six air pollutants tracked by the Environmental Protection Agency (EPA) fell between 24% and 94%. Even if we go back to 1940 as a standard of comparison – forcing us to use emissions as a proxy for the pollution we really want to measure, since the latter wasn’t calculated prior to 1975 – we find that three of the six were lower in 1991 and total emissions were also lower in 1991. (Other developed countries showed similar progress during this time span.)

Water pollution was already decreasing when Rachel Carson wrote and continued to fall throughout the 1960s, 70s and 80s. The key was the introduction of wastewater treatment facilities to over three-quarters of the country. Previously polluted bodies of water like the Cuyahoga River, the Androscoggin River, the northern Hudson River and several of the Great Lakes became pure enough to host sport-fishing and swimming. The Mississippi River became one of the industrialized world’s purest major rivers. Unsafe drinking water became a non-problem. Again, this was accomplished despite the inefficient efforts of local governments, the worst of these being the persistent refusal to price water at the margin to discourage overuse.

Forests were thriving in the early 1990s, despite the rhetoric of environmental organizations that inveighed against “clear-cutting” by timber companies. In reality, the number of wooded acres in the U.S. had grown by 20% over the previous two decades. The state of Vermont had been covered 35% by forest in the late nineteenth century. By the early 1990s, this coverage had risen to 76%.

This improvement was owed to private-sector timber companies, which practiced the principle of “sustainable yield” timber management. By the early 1990s, annual timber growth had exceeded harvest every year since 1952. By 1992, the actual timber harvest was a minuscule 384,000 acres, six-tenths of 1% of the land available for harvest. Average annual U.S. wood growth was three times greater than in 1920.

Environmentalists whined about the timberlands opened up for harvest by the federal government in the national parks and wildlife refuges, but less logging was occurring in the National Forests than at any time since the early 1950s. Clear-cut timber was being replaced with new, healthier stands that attracted more wildlife diversity than the harvested “old-growth” forest.

As always, this progress occurred in spite of government, not because of it. The mileage of roads hacked out of national-park land by the Forest Service is three times greater than that of the federal Interstate highway system. The subsidized price at which the government sells logging rights on park land is a form of corporate welfare for timber companies. But the private sector bailed out the public in a manner that would have made John Muir proud.

Garbage disposal and solid-waste management may have been the most unheralded environmental victory won by the private sector. At the same time that Al Gore complained that “the volume of garbage is now so high that we are running out of places to put it,” modern technology had solved the problem of solid-waste disposal. The contemporary landfill had a plastic bottom and clay liner that together prevented leakage. It was topped with dirt to prevent odors and run-off. The entire U.S. estimated supply of solid waste for the next 500 years could be safely stored in one landfill 100 yards deep and 20 miles on a side. The only problem with landfills was siting, owing to the NIMBY (“not in my back yard”) philosophy fomented by environmentalism. Whatever benefit recycling had to offer could be had from private markets, by recycling only those materials whose benefits (sales revenue) exceeded their reclamation costs (including a “normal” profit).
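
The arithmetic behind that landfill claim is easy to lay out. The following minimal sketch (in Python) computes the volume of a landfill 20 miles on a side and 100 yards deep, using only the dimensions quoted above, and divides by 500 years to show the implied annual capacity; it illustrates the scale involved rather than verifying the underlying waste-generation estimate, which the text does not supply.

    # Back-of-the-envelope arithmetic for the landfill claim in the text:
    # one landfill 20 miles on a side and 100 yards deep, filled over 500 years.
    YARDS_PER_MILE = 1760

    side_yards = 20 * YARDS_PER_MILE   # 20 miles expressed in yards
    depth_yards = 100                  # 100 yards deep
    years = 500                        # storage horizon claimed in the text

    total_cubic_yards = side_yards ** 2 * depth_yards
    annual_capacity = total_cubic_yards / years

    print(f"Total volume: {total_cubic_yards:,.0f} cubic yards")
    print(f"Implied annual capacity: {annual_capacity:,.0f} cubic yards per year")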

Overpopulation was once the sales leader of environmentalism. In 1968’s The Population Bomb, leading environmentalist Paul Ehrlich wrote that “the battle to feed all of humanity is over. In the 1970s, the world will undergo famines – hundreds of millions of people are going to starve to death in spite of any crash programs embarked upon now. At this late date, nothing can prevent a substantial increase in the world death rate….” Ehrlich also predicted food riots and plummeting life expectancy in the U.S. and biological death for a couple of the Great Lakes.

Ehrlich was a great success at selling environmentalism. His book, and its 1990 sequel The Population Explosion, sold millions of copies and recruited untold converts to the cause. Unfortunately, his product had a limited shelf life because his prophecies were spectacularly inaccurate. The only famines were politically, not biologically, triggered; deaths were in the hundreds of thousands, not millions. Death rates declined instead of rising. The Great Lakes did not die; they were completely rehabilitated. Even worse, Ehrlich made a highly publicized bet with economist Julian Simon that the prices of five metals handpicked by Ehrlich would rise in real terms over a ten-year period. (The loser would pay the algebraic sum of the price changes incurred.) The prices went down in nominal terms despite the rising general level of prices over the interval – another spectacular prophetic failure by Ehrlich.
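
The settlement rule of the Simon-Ehrlich bet is worth making explicit, since it is the arithmetic that decided who paid whom. The sketch below (Python) applies the rule described above: a $200 stake on each of the five metals, with the loser paying the algebraic sum of the inflation-adjusted changes in those stakes. The metal names follow the historical wager, but the prices are hypothetical placeholders, not the actual 1980 and 1990 figures.

    # Settlement rule for the bet as described in the text: each metal carries a
    # $200 stake; the loser pays the algebraic sum of the (inflation-adjusted)
    # changes in the value of those stakes over the ten-year period.
    # The prices below are hypothetical placeholders, NOT the historical figures.
    STAKE = 200.0  # dollars per metal at the start of the bet

    # metal: (hypothetical real price at start, hypothetical real price at end)
    prices = {
        "chromium": (3.90, 3.70),
        "copper":   (1.00, 0.80),
        "nickel":   (3.00, 2.60),
        "tin":      (8.70, 3.90),
        "tungsten": (14.00, 10.00),
    }

    def stake_change(start_price, end_price):
        """Change in the real value of a $200 stake bought at start_price."""
        quantity = STAKE / start_price
        return quantity * end_price - STAKE

    total_change = sum(stake_change(s, e) for s, e in prices.values())

    if total_change < 0:
        print(f"Real prices fell: Ehrlich pays Simon ${-total_change:,.2f}")
    else:
        print(f"Real prices rose: Simon pays Ehrlich ${total_change:,.2f}")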

It’s not surprising that Ehrlich, rather than the population, bombed. In the 1960s, the world’s annual population growth was about 2.0%. By the 1990s, it would fall to 1.6%. (Today, of course, our problem is falling birth rates – the diametric opposite of that predicted by environmentalism.)

Thus the runaway population growth predicted by environmentalism never materialized to supply one leg of the inadequate food supply foreseen, with uncanny inaccuracy, by environmentalists. Ehrlich and others had foreseen a Malthusian scenario in which rising population growth overtook diminishing agricultural productivity. They were just as wrong about productivity as about population. The Green Revolution ushered in by Norman Borlaug et al. drove one of the world’s leading agricultural economists to declare that “the scourge of famine due to natural causes has been almost conquered….”

The other leg of environmentalism’s collapsing doomsday scenario of inadequate food was based on cancer incidence. Not only would the food supply prove insufficient, according to environmentalists, it was also unsafe. Industrial chemicals and pesticides were entering the food supply through food residues and additives. They were causing cancer. How did we know this? Tests on animals – specifically, on mice and rats – proved it.

There was only one problem with this assertion. Scientifically speaking, it was complete hooey. The cancer risk of one glass of wine was about 10,000–12,000 times greater than that posed by the additives and pesticide residues (cumulatively) in most food products. Most of our cancer risk comes from natural sources, such as sunlight and natural pesticides produced by plants, some of which occur in common foods. Meanwhile, cancer rates had remained steady or fallen over the previous fifty years except for lung cancers attributable to smoking and melanomas attributable to ultraviolet light. Cancer rates among young adults had decreased rapidly. Age-adjusted death rates had mostly fallen.

Energy insufficiency had been brought to market by environmentalists in the 1970s, during the so-called Energy Crisis. It sold well when OPEC was allowed to peg oil prices at stratospheric levels. But when the Reagan administration decontrolled prices, domestic production rose and prices fell. As the 1990s rolled around, environmentalists were reduced to citing “proven reserves” of oil (45 years) and natural gas (63 years) as “proof” that we would soon run out of fossil fuels and energy prices would then skyrocket. Of course, this was more hooey; proven reserves are the energy equivalent of inventory. Businesses hold inventory as the prospective benefits and costs dictate. Current inventories say nothing about the long-run prospect of shortages.

In 1978, for example, proven reserves of oil stood at 648 billion barrels, or 29.2 years’ worth at then-current levels of usage. Over the next 14 years, we used about 84 billion barrels, but – lo and behold – proven reserves rose to nearly a trillion barrels by 1992. That happened because it was now profitable to explore for and produce oil in a newly free market of fluctuating oil prices, making it cost-efficient to hold larger inventories of proven reserves. (And in today’s energy market, it is innovative technologies that are driving discoveries and production of new shale oil and gas.) Really, it is an idle pastime to estimate the number of years of “known” resources remaining, because nobody knows how much of a resource remains. It is not worth anybody’s time to try to make an accurate estimate; it is easier and more sensible simply to let the free market take its course. If the price rises, we will produce more and discover more reserves to hold as “inventory.” If we can’t find any more, the resultant high prices will give us the incentive to invent new technologies and find substitutes for the disappearing resource. That is exactly what has just happened with the process called “fracking.” We have long known that conventional methods of oil drilling left 30–70% of the oil in the ground because it was too expensive to extract. When oil prices rose high enough, fracking allowed us to get at those sequestered supplies. We knew this in the early 1990s, even if we didn’t know exactly what technological process we would ultimately end up using.
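
The “years of supply” arithmetic that makes proven reserves look like a countdown clock is simple enough to set out. The sketch below (Python) uses the 1978 figures quoted above (648 billion barrels, 29.2 years of supply) to back out the implied annual consumption, then shows how the ratio can grow rather than shrink if discoveries outpace usage; the discovery rate in the example is purely illustrative, not a historical figure.

    # Years-of-supply arithmetic using the 1978 figures quoted in the text.
    # Proven reserves behave like inventory: the ratio can grow even while we
    # consume, provided discoveries (driven by price and technology) outpace usage.
    reserves_1978 = 648.0        # billion barrels (from the text)
    years_of_supply_1978 = 29.2  # years (from the text)

    implied_annual_use = reserves_1978 / years_of_supply_1978
    print(f"Implied annual consumption: {implied_annual_use:.1f} billion barrels/year")

    # Illustrative only: suppose discoveries add 30 billion barrels a year while
    # consumption stays at the implied rate. After 14 years the ratio has grown.
    annual_discoveries = 30.0    # billion barrels/year -- hypothetical
    reserves = reserves_1978
    for _ in range(14):
        reserves += annual_discoveries - implied_annual_use

    print(f"Reserves after 14 years: {reserves:.0f} billion barrels")
    print(f"Years of supply after 14 years: {reserves / implied_annual_use:.1f} years")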

Conservation was the first product packaged and sold by environmentalism, long predating Rachel Carson. It dated back to the origin of the national-park system in Theodore Roosevelt’s day and the times of John Muir and John James Audubon. By the early 1990s, conservation was a mature product. The federal government was already the biggest landowner in the U.S. We already had more national parks than the federal government could hope to manage effectively. Environmentalists could no longer make any additional sales using conservation as the product.

Just about the only remaining salable product the environmentalists had was species preservation. Environmentalism flogged it for all it was worth, but that wasn’t much. After the Endangered Species Act was passed and periodic additions made to its list, what was left to do? Not nearly enough to support the upper-middle-class lifestyles of a few million environmentalists. (It takes an upper-middle-class income to enjoy the amenities of nature in all their glory.)

Environmentalism Presents: Global Warming

In the late 1980s, the theory that industrial activity was heating up the atmosphere by increasing the amount of carbon dioxide in the air began to gain popular support. In 1989, Time Magazine modified its well-known “Man of the Year” award to “Planet of the Year,” which it gave to “Endangered Earth.” It described the potential effects of this warming process as “scary.” The Intergovernmental Panel on Climate Change, an organization of environmentalists dedicated to selling their product, estimated that warming could average as much as 0.5 degrees Fahrenheit per decade over the next century, amounting to roughly a 5.4-degree-Fahrenheit increase in average temperature. This would cause polar ice caps to melt and sea levels to rise, swamping coastal settlements around the world – and that was just the beginning of the adverse consequences of global warming.

No sooner had rollout begun than the skepticism rolled in along with the new product. Scientists could prove that atmospheric carbon dioxide was increasing and that industrial activity was behind the increase, but they could not prove that carbon dioxide was causing the amount of warming actually measured. As a matter of fact, there wasn’t actually an unambiguous case to be made for warming. What warming could be found had mostly occurred at night, in the winter and in the Southern Hemisphere (not the locus of most industrial activity). And to top it all off, it was not clear whether the warming should be ascribed to the very long-run cyclical forces that have alternated the Earth between Ice Ages and tropical warming periods for many thousands of years. By 1994, Time Magazine (which needed a continuous supply of exciting new headlines just as much as environmentalists needed a new supply of products with which to scare the public) had given up on global warming and resuscitated a previous global-climate scare from the 1970s, the “Coming Ice Age.”

It is easy to see the potential benefits of the global-warming product for environmentalists. Heretofore, almost all environmentalist products had an objective basis. That is, they spotlighted real problems. Real problems have real solutions, and the hullabaloo caused by purchase of those products led to varying degrees of improvement in the problems. Note this distinction: the products themselves did not cause or lead to the improvement; it was the uproar created by the products that did the job. Most of the improvement was midwifed by economic measures, and environmentalism rejects economics the way vampires reject the cross. This put environmentalists in an anomalous position. Their very (indirect) success had worked against them. Their real income was dependent on selling environmentalism in any of various ways. Environmentalists cannot continue to sell more books about (say) air pollution when existing laws, regulations and devices have brought air quality to an acceptable level. They cannot continue to pass more coercive laws and regulations when the legally designated quality has been reached. Indeed, they will be lucky to maintain sales of previously written books to any significant degree. They cannot continue to (credibly) solicit donations on the strength of a problem that has been solved, or at least effectively managed.

Unfortunately for environmentalists, the environmental product is not like an automobile that gives service until worn out and needs replacement, ad infinitum. It is more like a vaccine that, once taken, needn’t be retaken. Once the public has been radicalized and sensitized to the need for environmentalism, it becomes redundant to keep repeating the process.

Global warming was a new kind of product with special features. Its message could not be ignored or softened. Either we reform or we die. There was no monkeying around with tradeoffs.

Unlike the other environmental products, global warming was not a real problem with real solutions. But that was good. Real problems get solved – which, from the environmentalist standpoint, was bad. Global warming couldn’t even be proved, let alone solved. That meant that we were forced to act and there could be no end to the actions, since they would never solve the problem. After all, you can’t solve a problem that doesn’t exist in the first place! Global warming, then, was the environmentalist gift that would keep on giving, endlessly beckoning the faithful, recruiting ever more converts to the cause, ringing the cash register with donations and decorating the mast of environmentalism for at least a century. Its very scientific dubiety was an advantage, since that would keep it in the headlines and keep its critics fighting against it – allowing environmentalists the perfect excuse to keep pleading for donations to fend off the evil global-warming deniers. Of course, lack of scientific credibility is also a two-edged sword, since environmentalists cannot force the public to buy their products and can never be quite sure when the credibility gap will turn the tide against them.

When you’re selling the environmentalist product, the last thing you want is certainty, which eliminates controversy. Controversy sells. And selling is all that matters. Environmentalists certainly don’t want to solve the problem of global warming. If the problem is solved, they have nothing left to sell! And if they don’t sell, they don’t eat, or at least they don’t enjoy any real income from environmentalism. Environmentalism is also aimed at gaining psychological benefits for its adherents by giving their lives meaning and empowering them by coercing people with whom they disagree. If there is no controversy and no problem, there is nothing to give their lives meaning anymore and no basis for coercing others.

The Economic Theory of Environmentalism

Both environmentalists and their staunchest foes automatically treat the environmental movement as a romantic crusade, akin to a religion or a moral reform movement. This is wrong. Reformers or altruists act without thought of personal gain. In contrast, environmentalists are self-interested individuals in the standard tradition of economic theory. Some of their transactions lie within the normal commercial realm of economics and others do not, but all are governed by economic logic.

That being so, should we view environmentalism in the same benign light as we do any other industry operating in a free market? No, because environmentalists reject the free market in favor of coercion. If they were content to persuade others of the merits of their views, their actions would be unexceptional. Instead, they demand subservience to their viewpoint via legal codification and all forms of legislative, executive, administrative and judicial tyranny. Their adherents number a few would-be dictators and countless petty dictators. Their alliance with science is purely opportunistic; one minute they accuse their opponents of being anti-scientific deniers and the next they are praying to the idol of Gaia and Mother Earth.

The only thing anti-environmentalists have found to admire about the environmental movement is its moral fervor. That concession is a mistake.

DRI-292 for week of 6-29-14: One in Six American Children is Hungry – No, Wait – One in Five!

An Access Advertising EconBrief:

One in Six American Children is Hungry – No, Wait – One in Five!

You’ve heard the ad. A celebrity – or at least somebody who sounds vaguely familiar, like singer Kelly Clarkson – begins by intoning somberly: “Seventeen million kids in America don’t know where their next meal is coming from or even if it’s coming at all.” One in six children in America is hungry, we are told. And that’s disgraceful, because there’s actually plenty of food, more than enough to feed all those hungry kids. The problem is just getting the food to the people who need it. Just make a donation to your local food pantry and together we can lick hunger in America. This ad is sponsored by the Ad Council and Feeding America.

What was your reaction? Did it fly under your radar? Did it seem vaguely dissonant – one of those things that strikes you wrong but leaves you not quite sure why? Or was your reaction the obvious one of any intelligent person paying close attention – “Huh? What kind of nonsense is this?”

Hunger is not something arcane and mysterious. We’ve all experienced it. And the world is quite familiar with the pathology of hunger. Throughout human history, hunger has been mankind’s number one enemy. In nature, organisms are obsessed with absorbing enough nutrients to maintain their body weight. It is only in the last few centuries that tremendous improvements in agricultural productivity have liberated us from the prison of scratching out a subsistence living from the soil. At that point, we began to view starvation as atypical, even unthinkable. The politically engineered famines that killed millions in the Soviet Union and China were viewed with horror; the famines in Africa attracted sympathy and financial support from the West. Even malnutrition came to be viewed as an aberration, something to be cured by universal public education and paternalistic government. In the late 20th century, the Green Revolution multiplied worldwide agricultural productivity manifold. As the 21st century dawned, the end of mass global poverty and starvation beckoned within a few decades and the immemorial problem of hunger seemed at last to be withering away.

And now we’re told that in America – for over a century the richest nation on Earth – our children – traditionally the first priority for assistance of every kind – are hungry at the ratio of one in six?

WHAT IS GOING ON HERE?

The Source of the Numbers – and the Truth About Child Hunger

Perhaps the most amazing thing about these ads, which constitute a full-fledged campaign, is the general lack of curiosity about their origins and veracity. Seemingly, they should have triggered a firestorm of criticism and investigation. Instead, they have been received with yawns.

The ads debuted last Fall. They were kicked off with an article in the New York Times on September 5, 2013, by Jane L. Levere, entitled “New Ad Campaign Targets Childhood Hunger.” The article is one long promotion for the ads and for Feeding America, but most of all for the “cause” of childhood hunger. That is, it takes for granted that a severe problem of childhood hunger exists and demands close attention.

The article cites the federal government as the source for the claim that “…close to 50 million Americans are living in ‘food insecure’ households,” or ones in which “some family members lacked consistent access throughout the year to adequate food.” It claims that “…almost 16 million children, or more than one in 5, face hunger in the United States.”

The ad campaign is characterized as “the latest in a long collaboration between Ad Council and Feeding America,” which supplies some 200 food banks across the country that in turn supply more than 61,000 food pantries, soup kitchens and shelters. Feeding America began in the late 1990s as another organization, America’s Second Harvest, which enlisted the support of A-list celebrities such as Matt Damon and Ben Affleck. This was when the partnership with the Ad Council started.

Priscilla Natkins, a Vice-President of Ad Council, noted that in the early days “only” one out of 10 Americans was hungry. Now the ratio is 1 out of 7 and more than 1 out of 5 children. “We chose to focus on children,” she explained, “because it is a more poignant approach to illustrating the problem.”

Further research reveals that, mirabile dictu, this is not the first time that these ads have received skeptical attention. In 2008, Chris Edwards of Cato Institute wrote about two articles purporting to depict “hunger in America.” That year, the Sunday supplement Parade Magazine featured an article entitled “Going Hungry in America.” It stated that “more than 35.5 million Americans, more than 12% of the population and 17% of our children, don’t have enough food, according to the Department of Agriculture.” Also in 2008, the Washington Post claimed that “about 35 million Americans regularly go hungry each year, according to federal statistics.”

Edwards’ eyebrows went up appropriately high upon reading these accounts. After all, this was even before the recession had been officially declared. Unlike the rest of the world, though, Edwards actually resolved to verify these claims by checking with the Department of Agriculture. Here is what he found.

In 2008, the USDA declared that approximately 24 million Americans were living in households that faced conditions of “low food security.” The agency defined this condition as eating “less varied diets, participat[ing] in Federal food-assistance programs [and getting] emergency food from community food pantries.” Edwards contended that this meant those people were not going hungry – by definition. And indeed, it is semantically perverse to define a condition of hunger by describing the multiple sources of food and change in composition of food enjoyed by the “hungry.”

The other 11 million people (of the 35 million cited in the two articles) fell into a USDA category called “very low food security.” These were people whose “food intake was reduced at times during the year because they had insufficient money or other resources for food” [emphasis added]. Of these, the USDA estimated that some 430,000 were children. These would (then) comprise about 0.6% of American children, not the 17% mentioned by Parade Magazine, Edwards noted. Of course, having to reduce food intake on one or more occasions, to some unnamed degree and for financial reasons, doesn’t exactly constitute “living in hunger” in the sense of not knowing where one’s next meal was coming from, as Edwards observed. The most that could, or should, be said was that the 11 million and the 430,000 might constitute possible candidates for victims of hunger.

On the basis of this cursory verification of the articles’ own sources, Chris Edwards concluded that hunger in America ranked with crocodiles in the sewers as an urban myth.

We can update Edwards’ work. The USDA figures come from survey questions distributed and tabulated by the Census Bureau. The most recent data available were released in December 2013 for calendar year 2012. About 14.5% of households fell into the “low food security” category and about 5.7% of households were in the “very low food security” pigeonhole. Assuming the current average of roughly 2.58 persons per household, this translates to approximately 34 million people in the first category and just under 13.5 million people in the second category. If we assume the same fraction of children in these at-risk households as in 2008, that would imply about 635,000 children in the high-risk category, or less than 0.9% of the nation’s children. That is a far cry from the 17% of the nation’s children mentioned in the Parade Magazine article of 2008. It is a farther cry still from the 17,000,000 children mentioned in the current ads, which would be over 20% of America’s children.
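
The conversion performed in that update is mechanical, and a short sketch makes the steps explicit: multiply a household share from the survey by a total household count and by average household size to get a person count, then scale the 2008 ratio of children to very-low-security people to get a child count. The survey shares, the 2.58 persons-per-household figure and the 2008 child figures come from the text; the total household count below is a placeholder assumption, since the essay does not state the base it used.

    # Sketch of the household-to-person conversion discussed above. Shares, the
    # 2.58 persons-per-household figure and the 2008 child count (430,000 children
    # among 11 million very-low-food-security people) come from the text; the
    # total-household figure is a placeholder ASSUMPTION, not taken from the essay.
    def people_in_category(total_households, share, persons_per_household=2.58):
        """People living in households that fall into a given survey category."""
        return total_households * share * persons_per_household

    def implied_children(very_low_people):
        """Scale the 2008 ratio of children to very-low-security people."""
        child_share_2008 = 430_000 / 11_000_000
        return very_low_people * child_share_2008

    HOUSEHOLDS = 100_000_000  # ASSUMPTION: illustrative round number of U.S. households

    low = people_in_category(HOUSEHOLDS, 0.145)       # "low food security" share, 2012
    very_low = people_in_category(HOUSEHOLDS, 0.057)  # "very low food security" share, 2012

    print(f"Low food security: {low / 1e6:.1f} million people")
    print(f"Very low food security: {very_low / 1e6:.1f} million people")
    print(f"Implied children at high risk: {implied_children(very_low):,.0f}")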

The USDA’s Work is From Hunger

It should occur to us to wonder why the Department of Agriculture – Agriculture, yet – should now reign as the nation’s arbiter of hunger. As it happens, economists are well situated to answer that question. They know that the federal food-stamp program began in the 1940s primarily as a way of disposing of troublesome agricultural surpluses. The federal government spent the decade of the 1930s throwing everything but the kitchen sink at the problem of economic depression. Farmers were suffering because world trade had imploded; each nation was trying to protect its own businesses by taxing imports from foreign producers. Since the U.S. was the world’s leading exporter of foodstuffs, its farmers were staggering under this impact. They were swimming in surpluses and bled so dry by the resulting low prices that they burned, buried or slaughtered their own output without bringing it to market in an effort to raise food prices.

The Department of Agriculture devised various programs to raise agricultural prices, most of which involved government purchases of farm goods to support prices at artificially high levels. Of course, that left the government with lots of surplus food on its hands, which it stored in Midwestern caves in a futile effort to prevent spoilage. Food distribution to the poor was one way of ridding itself of these surpluses, and this was handled by the USDA which was already in possession of the food.

Just because the USDA runs the food-stamp program (now run as a debit-card operation) doesn’t make it an expert on hunger, though. Hunger is a medical and nutritional phenomenon, not an agricultural one. Starvation is governed by the intake of sufficient calories to sustain life; malnutrition is caused by the maldistribution of nutrients, vitamins and minerals. Does the Census Bureau survey doctors on the nutritional status of their patients to provide the USDA with its data on “food insecurity?”

Not hardly. The Census Bureau simply asks people questions about their food intake and solicits their own evaluation of their nutritional status. Short of requiring everybody to undergo a medical evaluation and submit the findings to the government, it could hardly be otherwise. But this poses king-sized problems of credibility for the USDA. Asking people whether they ever feel hungry or sometimes don’t get “enough” food is no substitute for a medical evaluation of their status.

People can and do feel hungry without coming even close to being hungry in the sense of risking starvation or even suffering a nutritional deficit. Even more to the point, their feelings of hunger may signal a nutritional problem that cannot be cured by money, food pantries, shelters or even higher wages and salaries. The gap between the “low food security” category identified by the USDA and starving peoples in Africa or Asia is probably a chasm the size of the Grand Canyon.

The same America that is supposedly suffering rampant hunger among both adults and children is also supposedly suffering epidemics of both obesity and diabetes. There is only one way to reconcile these contradictions: by recognizing that our “hunger” is not the traditional hunger of starvation or malnutrition but rather the kind associated with diabetes and, hence, obesity. Over-ingestion of simple carbohydrates and starches can cause upward spikes in blood sugar among susceptible populations, triggering the release of insulin that stores the carbohydrate as fat. Since the carbohydrate is stored as fat rather than burned for energy, the body remains starved for energy and hungry even though it is getting fat. Thus do hunger and obesity coexist.

The answer is not more government programs, food stamps, food pantries and shelters. Nor, for that matter, is it more donations to non-profit agencies like Feeding America. It is not more food at all, in the aggregate. Instead, the answer is a better diet – something that millions of Americans have found out for themselves in the last decade or so. In the meantime, there is no comparison between the “hunger” the USDA is supposedly measuring and the mental picture we form in our minds when we think of hunger.

This is not the only blatant contradiction raised by the “hunger in America” claims. University of Chicago economist Casey Mulligan, in his prize-winning 2012 book The Redistribution Recession, has uncovered over a dozen government-program and rule changes that reduced the incentive to work and earn. He assigns these changes primary blame for the huge drop in employment and lag in growth that the U.S. has suffered since 2007. High on his list are the changes in the food-stamp program that substituted a debit card for stamps, eliminated means tests and allowed recipients to remain on the program indefinitely. A wealthy nation in which 46 million out of 315 million citizens are on the food dole cannot simultaneously be suffering a problem of hunger. Other problems, certainly – but not that one.

What About the Real Hunger?

That is not to say that real hunger is completely nonexistent in America. Great Britain’s BBC caught word of our epidemic of hunger and did its own story on it, following the New York Times, Washington Post, Parade Magazine party line all the way. The BBC even located a few appropriately dirty, ragged children for website photos. But the question to ask when confronted with actual specimens of hunger is not “why has capitalism failed?” or “why isn’t government spending enough money on food-security programs?” The appropriate question is “why do we keep fooling ourselves into thinking that more government spending is the answer when the only result is that the problem keeps getting bigger?” After all, the definition of insanity is doing the same thing over and over again and expecting a different result.

The New York Times article in late 2013 quoted two academic sources that were termed “critical” of the ad campaign. But they said nothing about its blatant lies and complete inaccuracy. No, their complaint was that it promoted “charity” as the solution rather than their own pet remedies, a higher minimum wage and more government programs. This calls to mind the old-time wisecrack uttered by observers of the Great Society welfare programs in the 1960s and 70s: “This year, the big money is in poverty.” The real purpose of the ad campaign is to promote the concept of hunger in America in order to justify big-spending government programs and so-called private programs that piggyback on the government programs. And the real beneficiaries of the programs are not the poor and hungry but the government employees, consultants and academics whose jobs depend on the existence of “problems” that government purports to “solve” but that actually get bigger in order to justify ever-more spending for those constituencies.

That was the conclusion reached, ever so indirectly and delicately, by Chris Edwards of Cato Institute in his 2008 piece pooh-poohing the “hunger in America” movement. It applies with equal force to the current campaign launched by non-profits like the Ad Council and Feeding America, because the food banks, food pantries and shelters are supported both directly and indirectly by government programs and the public perception of problems that necessitate massive government intervention. It is the all-too-obvious answer to the cry for enlightenment made earlier in this essay.

In this context, it is clear that the answer to any remaining pockets of hunger is indeed charity. Only private, voluntary charity escapes the moral hazard posed by the bureaucrat/consultant class that has no emotional stake in the welfare of the poor and unfortunate but a big stake in milking taxpayers. This is the moral answer because it does not force people to contribute against their will but does allow them to exercise free will in choosing to help their fellow man. A moral system that works must be better than an immoral one that fails.

Where is the Protest?

The upshot of our inquiry is that the radio ads promoting “hunger in America” and suggesting that America’s children don’t know where their next meal is coming from are an intellectual fraud. There is no evidence that such children exist in large numbers, but their existence in any number indicts the current system. Rather than rewarding the failure of our current immoral system, we should be abandoning it in favor of one that works.

Our failure to protest these ads and publicize the truth is grim testimony to how far America has fallen from its origins and ideals. In the first colonial settlements at Jamestown and Plymouth, colonists learned the bitter lesson that entitlement was not a viable basis for civilization and work was necessary for survival. We are in the process of re-learning that lesson very slowly and painfully.

DRI-259 for week of 2-2-14: Kristallnacht for the Rich: Not Far-Fetched

An Access Advertising EconBrief:

Kristallnacht for the Rich: Not Far-Fetched

Periodically, the intellectual class aptly termed “the commentariat” by The Wall Street Journal works itself into frenzy. The issue may be a world event, a policy proposal or something somebody wrote or said. The latest cause célèbre is a submission to the Journal’s letters column by a partner in one of the nation’s leading venture-capital firms. The letter ignited a firestorm; the editors subsequently declared that Tom Perkins of Kleiner Perkins Caufield & Byers “may have written the most-read letter to the editor in the history of The Wall Street Journal.”

What could have inspired the famously reserved editors to break into temporal superlatives? The letter’s rhetoric was both penetrating and provocative. It called up an episode in the 20th century’s most infamous political regime. And the response it triggered was rabid.

“Progressive Kristallnacht Coming?”

“…I would call attention to the parallels of fascist Nazi Germany to its war on its ‘one percent,’ namely its Jews, to the progressive war on the American one percent, namely ‘the rich.’” With this ice breaker, Tom Perkins made himself a rhetorical target for most of the nation’s commentators. Even those who agreed with his thesis felt that Perkins had no business using the Nazis in an analogy. The Wall Street Journal editors said “the comparison was unfortunate, albeit provocative.” They recommended reserving Nazi analogies for rarefied comparisons to tyrants like Stalin.

On the political Left, the reaction was less measured. The Anti-Defamation League accused Perkins of insensitivity. Bloomberg View characterized his letter as an “unhinged Nazi rant.”

No, this bore no traces of an irrational diatribe. Perkins had a thesis in mind when he drew an analogy between Nazism and Progressivism. “From the Occupy movement to the demonization of the rich, I perceive a rising tide of hatred of the successful one percent.” Perkins cited the abuse heaped on workers riding Google buses from the cities to the California peninsula. Their high wages allowed them to bid up real-estate prices, thereby earning the resentment of the Left. Perkins’ ex-wife Danielle Steel placed herself in the crosshairs of the class warriors by amassing a fortune writing popular novels. Millions of dollars in charitable contributions did not spare her from criticism for belonging to the one percent.

“This is a very dangerous drift in our American thinking,” Perkins concluded. “Kristallnacht was unthinkable in 1930; is its descendant ‘progressive’ radicalism unthinkable now?” Perkins’ point is unmistakable; his letter is a cautionary warning, not a comparison of two actual societies. History doesn’t repeat itself, but it does rhyme. Kristallnacht and Nazi Germany belong to history. If we don’t mend our ways, something similar and unpleasant may lie in our future.

A Short Refresher Course in Early Nazi Persecution of the Jews

Since the current debate revolves around the analogy between Nazism and Progressivism, we should refresh our memories about Kristallnacht. The name itself translates loosely into “Night of Broken Glass.” It refers to the shards of broken window glass littering the streets of cities in Germany and Austria on the night and morning of November 9-10, 1938. The windows belonged to houses, hospitals, schools and businesses owned and operated by Jews. These buildings were first looted, then smashed by elements of the German paramilitary SA (the Brownshirts) and SS (security police), led by the Gauleiters (regional leaders).

In 1933, Adolf Hitler was elevated to the German chancellorship after the Nazi Party won a plurality of votes in the national election. Almost immediately, laws placing Jews at a disadvantage were passed and enforced throughout Germany. The laws were the official expression of the philosophy of German anti-Semitism that dated back to the 1870s, the time when German socialism began evolving from the authoritarian roots of Otto von Bismarck’s rule. Nazi officialdom awaited a pretext on which to crack down on Germany’s sizable Jewish population.

The pretext was provided by the assassination of German diplomat Ernst vom Rath, shot in Paris on Nov. 7, 1938, by Herschel Grynszpan, a 17-year-old Polish Jewish youth born in Germany. The boy was apparently upset by German policies expelling his parents from the country. Ironically, vom Rath’s sentiments were anti-Nazi and opposed to the persecution of Jews. Vom Rath’s death on Nov. 9 was the signal for the release of Nazi paramilitary forces on a reign of terror and abduction against German and Austrian Jews. Police were instructed to stand by and not interfere with the SA and SS as long as only Jews were targeted.

According to official reports, 91 deaths were attributed directly to Kristallnacht. Some 30,000 Jews were spirited off to jails and concentration camps, where they were treated brutally before finally winning release some three months later. In the interim, though, some 2,000–2,500 Jews died in the camps. Over 7,000 Jewish-owned or operated businesses were damaged. Over 1,000 synagogues in Germany and Austria were burned.

The purpose of Kristallnacht was not only wanton destruction. The assets and property of Jews were seized to enhance the wealth of the paramilitary groups.

Today we regard Kristallnacht as the opening round of Hitler’s Final Solution – the policy that produced the Holocaust. This strategic primacy is doubtless why Tom Perkins invoked it. Yet this furious controversy will just fade away, merely another media preoccupation du jour, unless we retain its enduring significance. Obviously, Tom Perkins was not saying that the Progressive Left’s treatment of the rich is now comparable to Nazi Germany’s treatment of the Jews. The Left is not interning the rich in concentration camps. It is not seizing the assets of the rich outright – at least not on a wholesale basis. It is not reducing the homes and businesses of the rich to rubble – not here in the U.S., anyway. It is not passing laws to discriminate systematically against the rich – at least, not against the rich as a class.

Tom Perkins was issuing a cautionary warning against the demonization of wealth and success. This is a political strategy closely associated with the philosophy of anti-Semitism; that is why his invocation of Kristallnacht is apropos.

The Rise of Modern Anti-Semitism

Despite the politically correct horror expressed by the Anti-Defamation League toward Tom Perkins’ letter, reaction to it among Jews has not been uniformly hostile. Ruth Wisse, professor of Yiddish and comparative literature at Harvard University, wrote an op-ed for The Wall Street Journal (02/04/2014) defending Perkins.

Wisse traced the modern philosophy of anti-Semitism to the philosopher Wilhelm Marr, whose heyday was the 1870s. Marr “charged Jews with using their skills ‘to conquer Germany from within.’” Marr was careful to distinguish his philosophy of anti-Semitism from prior philosophies of anti-Judaism. Jews “were taking unfair advantage of the emerging democratic order in Europe with its promise of individual rights and open competition in order to dominate the fields of finance, culture and social ideas.”

Wisse declared that “anti-Semitism channel[ed] grievance and blame against highly visible beneficiaries of freedom and opportunity.” “Are you unemployed? The Jews have your jobs. Is your family mired in poverty? The Rothschilds have your money. Do you feel more secure in the city than you did on the land? The Jews are trapping you in the factories and charging you exorbitant rents.”

The Jews were undermining Christianity. They were subtly perverting the legal system. They were overrunning the arts and monopolizing the press. They spread Communism, yet practiced rapacious capitalism!

This modern German philosophy of anti-Semitism long predated Nazism. It accompanied the growth of the German welfare state and German socialism. The authoritarian political roots of Nazism took hold under Otto von Bismarck’s conservative socialism, and so did Nazism’s anti-Semitic cultural roots as well. The anti-Semitic conspiracy theories ascribing Germany’s every ill to the Jews were not the invention of Hitler, but of Wilhelm Marr over half a century before Hitler took power.

The Link Between the Nazis and the Progressives: the War on Success

As Wisse notes, the key difference between modern anti-Semitism and its ancestor – what Wilhelm Marr called “anti-Judaism” – is that the latter abhorred the religion of the Jews while the former resented the disproportionate success enjoyed by Jews far more than their religious observances. The modern anti-Semitic conspiracy theorist pointed darkly to the predominance of Jews in high finance, in the press, in the arts and in the movie studios and asked rhetorically: How do we account for the coincidence of our poverty and their wealth, if not through the medium of conspiracy and malefaction? The case against the Jews was portrayed as prima facie and morphed into per se through repetition.

Today, the Progressive Left operates in exactly the same way. “Corporation” is a pejorative. “Wall Street” is the antonym of “Main Street.” The very presence of wealth and high income is itself damning; “inequality” is the reigning evil and is tacitly assigned a pecuniary connotation. Of course, this tactic runs counter to the longtime left-wing insistence that capitalism is inherently evil because it forces us to adopt a materialistic perspective. Indeed, environmentalism embraces anti-materialism to this day while continuing to bunk in with its progressive bedfellows.

We must interrupt with an ironic correction. Economists – according to conventional thinking the high priests of materialism – know that it is human happiness and not pecuniary gain that is the ultimate desideratum. Yet the constant carping about “inequality” looks no further than money income in its supposed solicitude for our well-being. Thus, the “income-inequality” progressives – seemingly obsessed with economics and materialism – are really anti-economic. Economists, supposedly green-eyeshade devotees of numbers and models, are the ones focusing on human happiness rather than ideological goals.

German socialism metamorphosed into fascism. American Progressivism is morphing from liberalism to socialism and – ever more clearly – homing in on its own version of fascism. Both employed the technique of demonization and conspiracy to transform the mutual benefit of free voluntary exchange into the zero-sum result of plunder and theft. How else could productive effort be made to seem fruitless? How else could success be made over into failure? This is the cautionary warning Perkins was sounding.

The Great Exemplar

The great Cassandra of political economy was F.A. Hayek. Early in 1929, he predicted that Federal Reserve policies earlier in the decade would soon bear poisoned fruit in the form of a reduction in economic activity. (His mentor, Ludwig von Mises, was even more emphatic, foreseeing “a great crash” and refusing a prestigious financial post for fear of association with the coming disaster.) He predicted that the Soviet economy would fail owing to lack of a functional price system; in particular, missing capital markets and interest rates. He predicted that Keynesian policies begun in the 1950s would culminate in accelerating inflation. All these came true, some of them within months and some after a lapse of years.

Hayek’s greatest prediction was really a cautionary warning, in the same vein as Tom Perkins’ letter but much more detailed. The 1944 book The Road to Serfdom made the case that centralized economic planning could operate only at the cost of the free institutions that distinguished democratic capitalism. Socialism was really another form of totalitarianism.

The reaction to Hayek’s book was much the same as the reaction to Perkins’ letter. Many commentators who should have known better accused both men of fascism; they also accused them of describing a current state of affairs when each was really trying to avoid a dystopia.

The flak Hayek took was especially ironic because his book actually served to prevent the outcome he feared. But instead of winning the acclaim of millions, this earned him the scorn of intellectuals. The intelligentsia insisted that Hayek predicted the inevitable succession of totalitarianism after the imposition of a welfare state. When welfare states in Great Britain, Scandinavia, and South America failed to produce barbed wire, concentration camps and German Shepherd dogs, the Left advertised this as proof of Hayek’s “exaggerations” and “paranoia.”

In actual fact, Great Britain underwent many of the changes Hayek had feared and warned against. The notorious “Control of Engagement” order, for instance, was an attempt by a Labour government to centrally control the English labor market – to specify an individual’s work and wage rather than allowing free choice in an impersonal market to do the job. The attempt failed just as dismally as Hayek and other free-market economists had foreseen it would. In the 1980s, it was Hayek’s arguments, wielded by Prime Minister Margaret Thatcher, that paved the way for the rolling back of British socialism and the taming of inflation. It’s bizarre to charge the prophet of doom with inaccuracy when his prophecy is the savior, but that’s what the Left did to Hayek.

Now they are working the same familiar con on Tom Perkins. They begin by misconstruing the nature of his argument. Later, if his warnings are successful, they will use that against him by claiming that his “predictions” were false.

Enriching Perkins’ Argument

This is not to say that Perkins’ argument is perfect. He has instinctively fingered the source of the threat to our liberties. But while Perkins himself may be rich, his argument isn’t; it is threadbare and skeletal. It could use some enriching.

The war on the wealthy has been raging for decades. The opening battle is lost to history, but we can recall some early skirmishes and some epic brawls prior to Perkins.

In Europe, the war on wealth used anti-Semitism as its spearhead. In the U.S., however, the popularity of Progressives in academia and government made antitrust policy a more convenient wedge for their populist initiatives against success. Antitrust policy was a crown jewel of the Progressive movement in the early 1900s; Presidents Theodore Roosevelt and William Howard Taft cultivated reputations as “trust busters.”

The history of antitrust policy exhibits two pronounced tendencies: the use of the laws to restrict competition for the benefit of incumbent competitors and the use of the laws by the government to punish successful companies for various political reasons. The sobering research of Dominick Armentano shows that antitrust policy has consistently harmed consumer welfare and economic efficiency. The early antitrust prosecution of Standard Oil, for example, broke up a company that had consistently increased its output and lowered prices to consumers over long time spans. The Orwellian rhetoric accompanying the judgment against ALCOA in the 1940s reinforces the notion that punishment, not efficiency or consumer welfare, was behind the judgment. The famous prosecutions of IBM and AT&T in the 1970s and 80s each spawned book-length investigations showing the perversity of the government’s claims. More recently, Microsoft became the latest successful firm to reap the government’s wrath for having the temerity to revolutionize industry and reward consumers throughout the world.

The rise of the regulatory state in the 1970s gave agencies and federal prosecutors nearly unlimited, unsupervised power to work their will on the public. Progressive ideology combined with self-interest to create a powerful engine for the demonization of success. Prosecutors could not only pursue their personal agendas but also climb the career ladder by making high-profile cases against celebrities. The prosecution of Michael Milken of Drexel Burnham Lambert is a classic case of persecution in the guise of prosecution. Milken virtually created the junk-bond market, thereby originating an asset class that has enhanced the wealth of investors by untold billions or trillions of dollars. For his pains, Milken was sent to jail.

Martha Stewart is a high-profile celebrity who was, in effect, convicted of the crime of being famous. She was charged with and convicted of lying to federal investigators about a case in which the only underlying crime could have been insider trading. But she was the trader, and she was not charged with insider trading. The utter triviality of the matter and the absence of any damage to consumers or society at large make it clear that she was targeted because of her celebrity; i.e., her success.

Today, the impetus for pursuing successful individuals and companies comes primarily from the federal level. Harvey Silverglate (author of Three Felonies a Day) has shown that virtually nobody is safe from the depredations of prosecutors out to advance their careers by racking up convictions at the expense of justice.

Government is the institution charged with making and enforcing law, yet government has now become the chief threat to law. At the state and local level, governments hand out special favors and tax benefits to favored recipients – typically those unable to attain success on their own efforts – while making up the revenue from the earned income of taxpayers at large. At the federal level, Congress fails in its fundamental duty and ignores the law by refusing to pass budgets. The President appoints czars to make regulatory law, while choosing at discretion to obey the provisions of some laws and disregard others. In this, he fails his fundamental executive duty to execute the laws faithfully. Judges treat the Constitution as a backdrop for the expression of their own views rather than as a subject for textual fidelity. All parties interpret the Constitution to suit their own convenience. The overarching irony here is that the least successful institution in America has united in a common purpose against the successful achievers in society.

The most recent Presidential campaign was conducted largely as a jihad against the rich and successful in business. Mitt Romney was forced to defend himself against the charge of succeeding too well in his chosen profession, as well as the corollary accusation that his success came at the expense of the companies and workers in which his private-equity firm invested. Either his success was undeserved or it was really failure. There was no escape from the double bind against which he struggled.

It is clear, then, that the “progressivism” decried by Tom Perkins dates back over a century and that it has waged a war on wealth and success from the outset. The tide of battle has flowed – during the rampage of the Bull Moose, the Depression and New Deal, and the recent Great Recession and financial crisis – and ebbed – under Eisenhower and Reagan. Now the forces of freedom have their backs to the sea.

It is this much-richer context that forms the backdrop for Tom Perkins’ warning. Viewed in this panoramic light, Perkins’ letter looks more and more like the battle cry of a counter-revolution than the crazed rant of an isolated one-percenter.