DRI-247 for week of 10-12-14: Rhyming History in the Middle East

An Access Advertising EconBrief:

Rhyming History in the Middle East

The philosopher George Santayana is best known for the observation that “those who cannot remember the past are condemned to repeat it.” Another commentator amended Santayana by noting that, while history does not repeat itself literally, “it does rhyme.” Anybody literate in world history can cock an ear to the sound of developments in the Middle East today and recognize the rhymes.

Pedagogy is the art of focused oversimplification. Movies are a painless way of teaching history because they focus our attention through dramatic depiction on a visual canvas. Two famous movies provide a narrative paradigm for our Middle East historical rhyme.

Khartoum: The Death of “Chinese” Gordon and the Fall of the Sudan to Islamic Fanaticism

The 1966 film Khartoum depicts one of Great Britain’s most famous military debacles: the capture of the Sudan from Egypt by forces led by Muhammad Ahmad, the Islamic religious fanatic known as “the Mahdi” (the “guided one”). At first blush, no association with Great Britain is visible here, since Sudan was a protectorate of Egypt and part of the Turkish Ottoman empire – Sudan, Egypt and Turkey were not part of the British Empire. But Great Britain had long exercised influence in the Middle East for various reasons, and that influence extended to the use of military force in the region. One of the most famous of all British soldiers, General Charles G. Gordon, was Governor General of the Sudan and led the defense of its capital, Khartoum, at the time of the invasion. Gordon’s death may have been the worst public-relations disaster ever suffered by British arms.

(1) Gordon was a professional soldier-for-hire, having earned the nickname “Chinese” Gordon by helping the Empire of China put down the Taiping Rebellion in the early 1860s, shortly after the Second Opium War. His talents included not only skill as a military commander but also formidable engineering skills. He also served with distinction in imperial postings ranging from India to southern Africa. Prior to the Islamic revolt, Gordon had served a previous stint as Governor General in the Sudan, during which he had played a key role in suppressing the slave trade. This was the culmination of Great Britain’s century-long crusade against slavery. It was fitting that Gordon, world-renowned as an evangelical Christian (albeit non-denominational in his belief), should have delivered this coup de grace to the institution of slavery. It was Christianity, alone among the world’s great religions, that had spearheaded the fight against slavery across the globe.

According to the movie, Gordon returned to the Sudan in early 1884 at the request of British Prime Minister William Gladstone. Gladstone was responding to public pressure created when another British professional soldier, Col. William Hicks, led an ill-fated expeditionary force of 10,000 Egyptian soldiers against the forces of the Mahdi. The Islamic leader had organized an army in revolt against Egyptian and Turkish rule in the Sudan, urging his followers to kill Turks. The movie’s narrator blames Hicks for failing to grasp the “one essential feature of the [Sudanese] desert: its immensity. The Mahdi led him on and on.” At the opportune moment, the Mahdists attacked and annihilated Hicks and his forces. In fact, the incompetence and indifference of the Egyptian military were also factors in its defeat.

Can we hear the rhyme? For over 60 years, the United States has intervened militarily in countries that it did not own or officially protect. These include Korea, Vietnam, Cuba, Grenada, the Dominican Republic, Panama, Kuwait, Iraq, Afghanistan and Pakistan. In none of these cases was war declared, yet U.S. military forces were deployed and saw action. In each large-scale conflict, local forces were under the command of U.S. officers. The ability and spirit of those local troops were routinely questioned, even though success hinged at least partly on their efforts. In essence, then, the United States today is playing a role directly analogous to that played by Great Britain in the 19th century.

(2) An outcry to “avenge Hicks” prompted Gladstone to review his options. Gladstone was a liberal in the 19th-century sense of the word. He believed in free international trade as a medium to encourage peace among nations. He knew full well that British investors had a financial stake in the region and that the Suez Canal was considered a vital strategic interest of Great Britain. But Gladstone was viscerally reluctant to use the military power of the British government to bail out private investors. In the movie’s most chillingly prescient moment, he utters the line: “Britain will not assume the obligation to police the world.”

Can we hear the rhyme? That line of movie dialogue reverberates in our ears today. Every policy discussion of every undeclared war involving us has raised the specter of the U.S. as “world policeman.” In 1884, there was no body of enforceable international law and no international agency with the mission of enforcing it, let alone a mandate to deter or punish international aggression. After World War II, though, the World Court and the United Nations existed. Yet the U.S. still found itself intervening in conflict after conflict and maintaining a military presence in most of the countries of the world. To be sure, this is a function of the ineffectual, corrupt status of the U.N. and the dubious legitimacy of “international law” in the realm of national aggression. Even so, no thoughtful American can watch Khartoum without shuddering at this point.

(3) The movie’s Gladstone escapes his dilemma, at least temporarily. The British public is clamoring for Gordon to be sent back to the Sudan, the scene of one of his greatest triumphs. Queen Victoria herself supports this plan. (The Egyptians welcome the return of Gordon, whom they regard reverentially as a potential savior.) Gladstone’s advisor, Foreign Secretary Granville, suggests that Gordon be sent to the Sudan but put on a tight leash, with orders only to evacuate British and European military and civilian personnel. Gladstone objects that Gordon, notorious for keeping his own counsel, will exceed his orders and do his best to “embroil the government” in war. Another advisor, army officer J.D.H. Stewart, interjects that if Gordon were sent to the Sudan without powers or army, he could not hope to stand up to the Mahdi. “He would simply fail.” At which point, Granville ripostes, “What a pity.” Gladstone understands that Granville is implying that Gordon be set up to fail as a scapegoat for an administration that is unwilling to take responsibility for employing the necessary military force.

Can we hear the rhyme? Once again, the reverberations from this exchange are painful to our modern ears. In Korea, the U.N. – acting as little more than a token front for a venture under U.S. leadership and staffed predominantly by U.S. forces – began by prosecuting a so-called “police action” rather than a full-fledged war. What was the result of this moderate, carefully calibrated, half-hearted military action? The U.N. forces came within an eyelash of being pushed off the Korean peninsula by North Korean forces (aided by Chinese and Russian advisors and materiel). Only when the U.S. dropped all pretense of fighting a police action and began to fight a war of annihilation against North Korea did our forces turn the tide of battle, first against North Korea and later against the combined North Korean and Chinese forces.

Later in Vietnam, this scenario repeated itself. The U.S. began by sending military advisors under the Eisenhower and Kennedy administrations. This half-measure proved inadequate against a North Vietnam aided by China and the Soviet Union, so the Johnson administration began a policy of gradual escalation of hostilities. Bombing of Viet Cong sanctuaries and, later, of North Vietnam itself was a key part of this strategy. But only when U.S. forces annihilated the Viet Cong in South Vietnam during the Tet Offensive did we emerge militarily victorious. We were victorious in the sense that the Viet Cong were eliminated and North Vietnamese forces were evicted from the South. But just as Gordon’s first victory in the Sudan did not prove decisive, so our military victory in Vietnam did not prove lasting. Our support was withdrawn from South Vietnam while Communist support was maintained in the North, and South Vietnam could not stand on its own against the subsequent North Vietnamese invasion.

In Kuwait, our military victory was swift and decisive, since the volunteer U.S. army was far superior to any other military force in the world. But rather than eliminate the enemy, we stopped short of proceeding to Baghdad and deposing Saddam Hussein.

In the Iraq invasion of 2003, the initial military effort was once again irresistibly successful. But subsequent terrorist activities, based in surrounding nations, were treated using police tactics rather than full-fledged, rapid military force. Only when a change in leadership produced a “surge” in military force did the U.S. succeed in retaking its lost ground, much as it had in Korea and Vietnam earlier. Then the U.S. withdrew its forces – and within a short time, the opposition had re-formed and reasserted itself.

Now the U.S. has returned to the Middle East again – to fight Islamic fanaticism, again. We are proposing a moderate strategy of limited involvement, relying on bombing as a gradual first step, with the possibility of committing ground troops later. Again.

We should note that the movie’s conference between Gladstone and his advisors was a dramatic invention by Khartoum’s screenwriter Robert Ardrey. (Ardrey was highly respected in Hollywood and received an Oscar nomination for this screenplay. He eventually left Hollywood to write best-selling books on anthropology [!].) Among the numerous complications left out of the movie is the British army that had already been sent to Egypt before Gordon’s arrival. But the movie’s implication of vacillation and inconsistency in the Gladstone administration’s military actions is fully justified.

(4) Six months after the fall of Khartoum, the Mahdi himself died of typhus. He had provided a line of succession, however, and his successors continued to stir up trouble in the Sudan for years after his demise. This, coupled with the political furor caused by Gordon’s death and the abortive and futile military actions by the Gladstone administration, eventually motivated Great Britain to send another army to the Sudan. This one was commanded by an expert in desert warfare, General Kitchener, who had been a major serving in the Sudan during Gordon’s tenure. Kitchener’s forces destroyed the Mahdist army at the famous battle of Omdurman in 1898. Eventually, the Sudan attained independence from Egypt and freedom from the orbit of Great Britain.

Can we hear the rhyme? In Korea, moderation in the pursuit of war led to a stalemate that has lasted for a half-century. In Vietnam, it led to a Communist victory after American military victory. In Iraq, it left Saddam Hussein free to create havoc and necessitate our eventual return to Iraq – an outcome analogous to the British return to the Sudan. And now the U.S. has returned to the Middle East again to fight the same old aggressor – Islamic fanaticism – with a new name.

Douglas MacArthur said “In war, there is no substitute for victory.” Victory consists of complete subjugation of the enemy by the destruction of its means to fight and the surrender of its political authority. The attainment of these objectives requires a declaration of war and prosecution of war with a single-minded and wholehearted devotion to those ends. The U.S. has abandoned both of these principles, much as did Great Britain in the 19th and 20th centuries.  

(5) When Gordon arrived in the Sudan, his first move was to fortify his position by enlisting allies among local governments. The movie shows him visiting a neighboring sultan and former slaver named “Zobeir Pasha.” Gordon asks for aid against the Mahdi, whom he depicts as a common enemy. This request is hindered by the fact that Gordon had ordered the execution of Zobeir Pasha’s son during his war against the slave trade. Sure enough, Gordon meets with a stony refusal. This movie interlude is a stand-in for various real-life efforts by Gordon to build a coalition against the Mahdi – efforts that enjoyed little success.

Can we hear the rhyme? In every major military conflict of the last 60 years, the U.S. has faced the task of recruiting local support. In most (but not all) cases, this has entailed appealing to local tribes and factions. In Vietnam, for example, the U.S. was able to recruit the Montagnards, mountain tribesmen who were bitter enemies of the Viet Cong dating back to the time when they were called the Viet Minh. Unfortunately, winning over the “hearts and minds” of the remaining South Vietnamese population was a tougher job that took most of the war to accomplish. Distinguishing between friendly or neutral South Vietnamese and hostile Viet Cong was one of the biggest day-to-day headaches plaguing U.S. troops. In Iraq, most of the publicity surrounding the Bush administration’s agonizing struggle against terrorist counterattacks has focused on tribal and ethnic feuds among Sunnis, Shiites, Kurds and other sects and factions.

(6) The biggest set-piece scene in Khartoum and the movie’s dramatic highlight is the climactic battle scene, in which Mahdist forces invade the city and butcher the inhabitants. The death of Gordon himself is a memorable scene based on George William Joy’s famous painting of Gordon’s last stand. Gordon is shown descending the staircase from his office residence in the midst of the battle for the city. Such was the awe commanded by his presence among friend and foe alike that the battle stops dead for a few eerie seconds as the Mahdists pay the infidel devil his due by allowing him to descend a few steps unmolested. Then a soldier launches a spear into Gordon’s chest and the great soldier falls off the steps in perfect imitation of the painting. Acting in violation of the Mahdi’s personal injunction, his men behead Gordon and carry the head in triumph on a spear across the city. On Gordon’s lips, it was said, was an ironic smile.

Although the locale of Gordon’s death was apparently correct, the battle itself was another dramatic invention by Ardrey. A local official betrayed Gordon and the Egyptians by allowing the Mahdists nocturnal access to the city, obviating the need to launch an assault. Estimates of the ensuing slaughter vary from a conservative 10,000 to a more expansive 30,000. Interestingly, the movie shifts the locus of local corruption to economics; the local official hoards grain and sells it on the outside. When his corruption is discovered, Gordon orders his execution.

Can we hear the rhyme? A recurring theme in U.S. military conflicts has been that the people on whose behalf we are ostensibly fighting reject our help or even line up against us. In Korea, the early-arriving troops on the Korean peninsula noted that North Korean troops could blend in with the local population so well that pursuit was especially frustrating. In Vietnam, numerous stories of GIs betrayed and booby-trapped by locals made American troops wary and trigger-happy in their interactions with the South Vietnamese. In Iraq, the fact that American intervention began with the subjugation of the country created a climate of mistrust that lasted for the remainder of the U.S. occupation. This creates the anomalous picture of an American military purportedly serving a noble, altruistic cause but in practice having to cajole or even browbeat the supposed beneficiary into fighting off the opposition. What accounts for this picture, let alone its repetition?

The common factor is rebellion or revolt against the established order. Great Britain in the 19th century and the U.S. in the 20th century found themselves defending the established order against change. In a rebellion, it is often difficult to tell friend from foe and one never knows when one may become the other. In the movie, there is a key scene when Gordon boldly infiltrates the Mahdi’s camp, accompanied only by his friend and servant, Khaleel. While there, he learns that the Mahdi’s intention is to attack Khartoum and massacre all opponents – indeed, to conquer the entire Ottoman Empire, massacre all Turkish opposition and create an Islamic empire on the world stage. This meeting never took place – it was inserted to bolster the movie’s position that Gladstone should have decisively intervened militarily in support of Gordon and against the Mahdi. In other words, the Mahdi was portrayed as more than just a local rebel. He was an international aggressor. Gordon knew this and was a hero for single-handedly resisting him and warning the world. Gladstone displayed political cowardice in ignoring this warning – or so the movie contends.

This same theme resounded throughout U.S. military interventions. In Korea and Vietnam, Communism was the international aggressor. There were certainly good grounds for adopting this stance, since modern Communist doctrine vacillated between the export of international revolution a la Lenin and the more cautious doctrine of “socialism in one country.” But even after the collapse of Communism at the close of the 20th century, the doctrine of international aggression was preserved as justification for military action.

The Practical Value of the Middle East Rhyme

To be useful, historical rhyme must not only present a discernible pattern. It must also point the way to a desirable plan of action. It is one thing to suggest that the U.S. has fallen victim to the same political temptations as did Great Britain before her, for largely the same reasons. Of what practical value is this knowledge?

Our analysis suggests that both Great Britain and the U.S. were trying to do what Las Vegas gamblers would call “making their point the hard way.” It is one thing to say “I find this state of affairs deplorable and I want to see it changed.” It is quite another to show that the means proposed will actually change it. The next EconBrief will explore the reasons why both these great powers found it so excruciatingly difficult to effect change using military force. Not surprisingly, those reasons are economic. It is even less surprising that the better plan would be to deploy economic logic rather than boots on the ground. A recent op-ed by the noted Latin American economist and political advisor Hernando de Soto points the way.

DRI-228 for week of 10-5-14: Can We Afford the Risk of EPA Regulation?

An Access Advertising EconBrief:

Can We Afford the Risk of EPA Regulation?

Try this exercise in free association. What is first brought to mind by the words “government regulation?” The Environmental Protection Agency would be the answer of a plurality, perhaps a majority, of Americans. Now envision the activity most characteristic of that agency. The testing of industrial chemicals for toxicity, with a view to determining safe levels of exposure for humans, would compete with such alternative duties as monitoring air quality and mitigating water pollution. Thus, we have a paradigmatic case of government regulation of business in the public interest – one we would expect to highlight regulation at its best.

One of the world’s most distinguished scientists recently reviewed EPA performance in this area. Richard Wilson, born in Great Britain but long resident at Harvard University, made his scientific reputation as a pioneer in the field of particle physics. In recent decades, he became perhaps the leading expert on nuclear safety, studying the accidents at Three Mile Island, Chernobyl and Fukushima, Japan. Wilson is a recognized leader in risk analysis, the study of risk and its mitigation. In a recent article in the journal Regulation (“The EPA and Risk Analysis,” Spring 2014), Wilson offers a sobering explanation of “how inadequate – and even mad and dangerous – the U.S. Environmental Protection Agency’s procedures for risk analysis are, and why and how they must be modified.”

Wilson is neither a political operative nor a laissez-faire economist. He is a pure scientist whose credentials gleam with ivory-tower polish. He is not complaining about excesses or aberrations, but rather characterizing the everyday policies of the EPA. Yet he has forsworn the dispassionate language of the academy for words such as “mad” and “dangerous.” Perhaps most alarming of all, Wilson despairs of finding anybody else willing to speak publicly on this subject.

The EPA and Risk 

The EPA began life in 1970 during the administration of President Richard Nixon. It was the culmination of the period of environmental activism begun with the publication of Rachel Carson’s book Silent Spring in 1962. The EPA’s foundational project was the strict scrutiny of industrial society for the risks it allegedly posed to life on Earth. To that end, the EPA proposed “risk assessment and regulations” for about 20 common industrial solvents.

How was the EPA to assess the risks of these chemicals to humans? Well, standard scientific procedure called for laboratory testing that isolates a chemical’s effects from the myriad other forces impinging on human health. There were formidable problems with this approach, though. For one thing, teasing out the full range of effects might take decades; epidemiological studies on human populations are commonly carried out over 10 years or more. Another problem is that human subjects would be exposed to considerable risk, particularly if dosages were amped up to shorten the study periods.

The EPA solved – or rather, addressed – the problem by using laboratory animals such as rats and mice as test subjects. Particularly in the beginning, few people objected when rodents received astronomically high dosages of industrial chemicals in order to determine the maximum level of exposure consistent with safety.

Of course, everybody knew that rodents were not comparable to people for research purposes. The EPA addressed that problem, too, by adjusting its test results in the simplest possible ways. It treated the results applicable to humans as scalar multiples of the rodent results, with the scale determined by body weight. It assumed that the chemicals were linear in their effects on people, rather than (say) having little or no effect up to a certain point or threshold. (A linear effect would be infinitesimally small with the first molecule of exposure and would rise with each subsequent molecule of exposure.)
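In symbols (a minimal sketch of the two assumptions; the slope k and the threshold dose d_0 are purely illustrative, not EPA parameters), the contrast can be written as

\[ \text{Linear, no threshold: } R(d) = k\,d \qquad \text{Threshold: } R(d) = k\,\max(0,\, d - d_0), \]

where d is the dose and R(d) is the lifetime risk attributed to it. Under the linear assumption every increment of exposure adds some risk, however small; under the threshold assumption doses below d_0 add none.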

Of all the decisions made by the EPA, none was more questionable than the standard it set for allowable risk from exposure to toxic chemicals: no more than one premature death per million of exposed population over a statistical lifetime. Moreover, the EPA also assumed the most unfavorable circumstances of exposure – that is, that those at risk would be exposed daily, and at the level normally encountered only by workers occupationally exposed to high concentrations of the substance. This maximum safe level of exposure was itself a variable, expressed as a range rather than a single point, because the EPA could not assume that all rats and mice were identical in their response to the chemicals. Here again, the EPA assumed the maximum degree of uncertainty in reaction when calculating allowable risk. As Wilson points out, if the EPA had assumed average uncertainty instead, the implied statistical risk would have fallen to about one in ten million.

It is difficult for the layperson to evaluate this “one out of a million” EPA standard. Wilson tries to put it in perspective. The EPA is saying that the a priori risk imposed by an industrial chemical should be roughly equivalent to that imposed by smoking two cigarettes in an average lifetime. Is that a zero risk? Well, not in the abstract sense, but it will do until something better comes along. Wilson suggests that the statistical chance of an asteroid hitting Earth is from 100 to 1,000 times greater than this. There are several chemicals found in nature, including arsenic and mercury, each of which poses a risk of death roughly 1,000 times greater than this EPA-stipulated risk. Still astronomically small, mind you – but vastly greater than the arbitrary standard set by the EPA for industrial chemicals.
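To get a rough sense of the magnitudes involved (a back-of-the-envelope sketch; the 300-million population and 75-year lifetime are illustrative assumptions, not figures from Wilson’s article), suppose the entire U.S. population somehow received the maximal exposure the EPA assumes:

\[ 300{,}000{,}000 \times \frac{1}{1{,}000{,}000} = 300 \text{ premature deaths per statistical lifetime, i.e., about } \frac{300}{75} = 4 \text{ per year nationwide.} \]

That is the order of harm the standard is designed to rule out, and even that only under the worst-case exposure assumptions built into the calculation.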

Having painted this ghastly portrait of your federal government at work, Wilson steps back to allow us a view of the landscape that the EPA is working to alter. There are some 80,000 industrial chemicals in use in the U.S. Of these, about 20 have actually been studied for their effects on humans. Somewhere between 10,000 and 20,000 chemicals have been tested on lab animals using methods like those described above. That means that, very conservatively speaking, there are at least 60,000 chemicals for which we have only experience as a guide to their effects on humans.

What should we do about this vast uncharted chemical terrain? Well, we know what the EPA has done in the past. A few years ago, Wilson reminds us, the agency was faced with the problem of disposing of stocks of nerve gas, including sarin, one of the most deadly of all known substances. The agency conducted a small test incineration and then looked at the resulting combustion products. When it found only a few on its list of toxic chemicals, it ignored the various other unstudied chemicals among the byproducts and dubbed the risk of incineration to be zero! It was so confident of this verdict that it solicited the forensic testimony of Wilson on its behalf – in vain, naturally.

Wilson has now painted a picture of a government agency gripped by analytical psychosis. It arrogates to itself the power to dictate safety to us, imposes unreal standards of safety on chemicals it studies – then arbitrarily assumes that unstudied chemicals are completely safe! Now we see where Wilson’s words “mad and dangerous” came from.

Economists who study government should be no more surprised by the EPA’s actions than by Wilson’s horrified reaction to them. The scientist reacts as if he were a child who has discovered for the first time that his parents are capable of the same human frailties as other humans. “Americans deserve better from their government. The EPA should have a sound, logical and scientific justification for its chemical exposure regulations. As part of that, agency officials need to accept that they are sometimes wrong in their policymaking and that they need to change defective assessments and regulations.” Clearly, Wilson expects government to behave like science – or rather, like science is ideally supposed to behave, since science itself does not live up to its own high standards of objectivity and honesty. Economists are not nearly that naïve.

The Riskless Society

Where did the EPA’s standard of no more than one premature death per million exposed per statistical lifetime come from? “Well, let’s face it,” the late Aaron Wildavsky quipped, “no real man tells his girlfriend that she is one in a hundred thousand.” Actually, Wildavsky observes, “the real root of ‘one in a million’ can be traced to the [government's] efforts to find a number that was essentially equivalent to zero.” Lest the reader wonder whether Wilson and Wildavsky are peculiar in their insistence that this “zero-risk” standard is ridiculous, we have it on the authority of John D. Graham, former director of the Harvard School of Public Health’s Center for Risk Analysis, that “No one seriously suggested that such a stringent risk level should be applied to a[n already] maximally exposed individual.”

Time has also been unkind to the rest of the EPA’s methodological assumptions. Linear cancer causation has given way to recognition of a threshold up to which exposure is harmless or even beneficial. This jibes with the findings of toxicology, in which the time-honored first principle is “the dose makes the poison.” The existence of thresholds makes it next to impossible to gauge safe levels of exposure using either tests on lab animals or experience with low levels of human exposure. As Wildavsky notes, it also helps explain our actual experience over time, in which “health rates keep getting better and better while government estimates of risk keep getting worse and worse.”

During his lifetime, political scientist Aaron Wildavsky was the pioneering authority on government regulation of human risk. In his classic article “No Risk Is the Highest Risk of All” (American Scientist, 1979, 67(1), 32-37) and his entry on the “Riskless Society” in The Fortune Encyclopedia of Economics (1993, pp. 426-432), Wildavsky produced the definitive reply to the regulatory mentality that now grips America in a vise.

Throughout mankind’s history, human advancement has been spearheaded by technological innovation. This advancement has been accompanied by risk. The field of safety employs tools of risk reduction. There are two basic strategies for risk reduction. The first is anticipation. The EPA, and the welfare state in general, tacitly assume this to be the only safety strategy. But Wildavsky notes that anticipation is a limited strategy because it only works when we can “know the quality of the adverse consequence expected, its probability and the existence of effective remedies.” As Wildavsky dryly notes, “the knowledge requirements and the organizational capacities required to make anticipation an effective strategy… are very large.”

Fortunately, there is a much more effective remedy close at hand. “A strategy of resilience, on the other hand, requires reliance on experience with adverse consequences once they occur in order to develop a capacity to learn from the harm and bounce back. Resilience, therefore, requires the accumulation of large amounts of generalizable resources, such as organizational capacity, knowledge, wealth, energy and communication, that can be used to craft solutions to problems that the people involved did not know would occur.” Does this sound like a stringent standard to meet? Actually, it shouldn’t. We already have all those things in the form of markets, the very things that produce and deliver our daily bread. Markets meet and solve problems, anticipated and otherwise, on a daily basis.

Really, this is an old problem in a new guise. It is the debate between central planning – which assumes that the central planners already know everything necessary to plan our lives for us – and free competition – which posits that only markets can generate the information necessary to make social cooperation a reality. Wildavsky has put the issue in political and scientific terms rather than the economic terms that formed the basis of the Socialist Calculation debates of the 1920s and 30s between socialists Oskar Lange and Fred Taylor and Austrian economists Ludwig von Mises and F. A. Hayek. The EPA is a hopelessly outmoded relic of central planning that not only fails to achieve its objectives, but threatens our freedom in the bargain.

In “No Risk Is the Highest Risk of All,” Wildavsky utilizes the economic concept of opportunity cost to make the decisive point that by utilizing resources inefficiently to drive one particular risk all the way to zero, government regulators are indirectly increasing other risks. Because this tradeoff is not made through the free market but instead by government fiat, we have no reason to think that people are willing to bear these higher alternative risks in order to gain the infinitesimally small additional benefits of driving the original risk all the way to zero. As a purely practical matter, we can be sure that this tradeoff is wildly unfavorable. The EPA bans an industrial chemical because it does not meet its impossibly high safety standard. Businesses across the nation have to utilize an inferior substitute. This leaves the businesses, their shareholders, employees and consumers poorer, with less real income to spend on other things. Safety is a normal good, something people and businesses spend more on when their real incomes rise and less on when real incomes fall. The EPA’s foolish “zero-risk” regulatory standard has created a ripple effect that reduces safety throughout the economy.

The Proof of the Safety is in the Living

Wildavsky categorically cited the “wealth to health” linkage as a “rule without exception.” To get a concrete sense of this transformation in the 20th century, we can consult the U.S. historical life expectancy and mortality tables. In the roughly 100 years between 1890 and 1987, life expectancy for white males rose from 42.5 years to 72.2 years; for non-white males, from 32.54 years to 67.3 years. For white females, it rose from 44.46 years to 78.9 years; for non-white females, from 35.04 years to 75.2 years. (Note, as did Wildavsky, that the longevity edge enjoyed by females over males came to exceed that enjoyed by whites over non-whites.)

Various diseases were fearsome killers at the dawn of the 20th century, but petered out over the course of the century. Typhoid fever killed an average of 26.7 people per 100,000 as the century turned (from 1900-04); by 1980 it had been virtually wiped out. Communicable diseases of childhood (measles, scarlet fever, whooping cough and diphtheria) carried away 65.2 out of every 100,000 people in the early days of the century but, again by 1980, they had been virtually wiped out. Pneumonia used to be called “the old man’s friend” because it was the official cause of so many elderly deaths, which is why it accounted for 161.5 deaths per 100,000 people during 1900-04. But this number had plummeted to 22.0 by 1980. Influenza caused 22.8 deaths out of every 100,000 during 1900-04, but the disease was near extinction in 1980 with only 1.1 deaths ascribed to it. Tuberculosis was another lethal killer, racking up 184.7 deaths per 100,000 on average in the early 1900s. By 1980, the disease was on the ropes with a death rate of only 0.8 per 100,000. Thanks to antibiotics, appendicitis went from lethal to merely painful, with a death rate of only 0.3 per 100,000 people. Syphilis went from scourge of sexually transmitted diseases to endangered species of same, going from 12.9 deaths per 100,000 to 0.1.

Of the major causes of death, only cancer and cardiovascular disease showed significant increase. Cancer is primarily a disease of age; the tremendous jump in life expectancy meant that many people who formerly died of all the causes listed above now lived to reach old age, where they succumbed to cancer. That is why the incidence of most diseases fell but why cancer deaths increased. “Heart failure” is a default listing for cause of death when the proximate cause is sufficient to cause organ failure but not acute enough to cause death directly. That accounts for the increase in cardiovascular deaths, although differences in lifestyle associated with greater wealth also bear part of the blame for the failure of cardiovascular deaths to decline despite great advances in medical knowledge and technology. (In recent years, this tendency has begun to reverse.)

The activity-linked mortality tables are also instructive. The tables are again expressed as a rate of fatality per 100,000 people at risk, which can be translated into absolute numbers with the application of additional information. By far the riskiest activity is motorcycling, with an annual death rate of 2,000 per 100,000 participants. Smoking lags far behind at 300, with only 120 of these ascribable to lung cancer. Coal mining is the riskiest occupation, with 63 deaths per 100,000 participants, but it has to share the title with farming. It is riskier a priori to drive a motor vehicle (24 deaths per 100,000) than to be a uniformed policeman (22 deaths). Roughly 60 people per year are fatally struck by lightning. The lowest risk actually calculated by statisticians is the risk of dying from a meteorite strike: 0.000006 per 100,000, or roughly six chances in 100 billion.
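The translation into absolute numbers is simple arithmetic (a sketch; the 500,000-rider figure is hypothetical, chosen only to illustrate the conversion):

\[ \text{expected annual deaths} = \frac{\text{rate}}{100{,}000} \times \text{participants} = \frac{2{,}000}{100{,}000} \times 500{,}000 = 10{,}000. \]

By the same formula, 500,000 policemen at 22 deaths per 100,000 would suffer about 110 deaths per year, roughly two orders of magnitude fewer for the same number of people at risk.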

It is clear that risk is not something to be avoided at all costs; it is the price of activities that provide benefits. Driving, coal mining and policing carry the risk of death but also provide broad-based benefits not only to practitioners but to consumers and producers. Industrial chemicals also provide widespread benefits to the human race. It makes no sense to artificially mandate a “one in a million” death-risk for industrial solvents when just climbing into the driver’s seat of a car subjects each of us to a risk that is thousands of times greater than that. We don’t need all-powerful government pretending to regulate away the risk associated with human activities while actually creating new hidden risks. We need free markets to properly price the benefits and costs associated with risk, allowing us both to run risks efficiently and to avoid them.

This fundamental historical record has been replicated with minor variations across the Western industrial landscape. It was not achieved by heavy-duty government regulation of business but by economic growth and markets; that growth began to slow as the welfare state and regulation came to predominate. Ironically, recent slippage in health and safety has been associated with the transfer of public distrust from government – where it is well-founded – to science. Falling vaccination rates have produced a revival of diseases, such as measles and diphtheria, that had previously been nearly extinct.

The Jaundiced Eye of Economists

If there is any significant difference in point of view between scientists (Wilson) and political scientists (Wildavsky) on the one hand, and economists on the other, it is the willingness to take the good faith of government for granted. Wilson apparently believes that government regulators can be made to see the error of their ways. Wildavsky apparently viewed government regulators as belonging to a different school of academic thought (“anticipation vs. resilience”) – maybe they would see the light when exposed to superior reasoning.

Economists are more practical or, if you like, more cynical. It is no coincidence that government regulatory agencies do not practice good science even when tasked to do so. They are run by political appointees and funded by politicians; their staffs are government employees paid out of political appropriations. The power they possess will inevitably be wielded for political purposes. Most legal cases are settled because they are too expensive to litigate and because one or both parties fear the result of a trial. Government regulatory agencies use their power to bully the private sector into acquiescence with the political results favored by politicians in power. Private citizens fall in line because they lack the resources to fight back and because they fear the result of an administrative process whose rules are designed to favor the government. This is the EPA as it is known to American businesses in their everyday world, not as it exists in the conceptual realities of pure natural science or academic political science.

The preceding paragraph describes a kind of bureaucratic totalitarianism that differs from classical despotism. The despot or dictator is a unitary ruler, while the bureaucracy wields a diffused form of absolute power. Nevertheless, this is the worst outcome associated with the EPA and top-down federal-government regulation in general. The risks of daily life are manageable compared to the risks of bad science dictated by government. And both these species of risk pale next to the risk of losing our freedom of action, the very freedom that allows us to manage the risks that government regulation does not and cannot begin to evaluate or lessen.

The EPA is just too risky to have around.

DRI-275 for week of 9-28-14: Touchdown-Celebration Prayer: Time for Separation of Church and Red Zone?

An Access Advertising EconBrief:

Touchdown-Celebration Prayer: Time for Separation of Church and Red Zone?

Fans of the National Football League (NFL) have become inured to the spectacle of celebrations conducted by players who score a touchdown. These actions have assumed a variety of forms, ranging from ordinary excesses of joy and enthusiasm like jumping up and down to esoteric rituals like spiking or dunking the football over the goalpost. Perhaps the most common form is some sort of gyration or celebratory dance. The practice originated among certain players whose fame depended at least as much on their self-promotional zeal as upon their athletic prowess – Deion Sanders, formerly of the Dallas Cowboys, comes particularly to mind.

Older readers will appreciate the striking contrast between this modern attitude and that exhibited by legendary stars of yesteryear like Jim Brown of the Cleveland Browns and Johnny Unitas of the Baltimore Colts. Brown, who may have been the greatest running back of all time, was slow to assume his stance prior to the center snap of the football and even slower to rise after being tackled when running the ball. His demeanor was impassive. He conserved his energy and saved his exertions for the time between the snap and the referee’s whistle signaling the end of a play. Did this account for the fact that his average yards gained per carry was the highest of any Hall of Fame runner?

Unitas was similarly deadpan on the field. As quarterback for the Colts, he terrified opponents and awed teammates with his knack for leading his team from behind in the closing seconds of a game. But fans could never have guessed by looking at him whether he had just been sacked for a loss or thrown the winning touchdown pass as time expired. If any of his teammates had ever done anything as gauche as celebrating a long run or spectacular catch, they would have been frozen solid by the icy stare known throughout the NFL as the “Unitas look.”

In the so-called “greatest football game ever played” – the 1958 NFL championship game between the Baltimore Colts and the New York Giants – Unitas provided the prelude to victory by completing a daring sideline pass to tight end Jim Mutscheller at the Giants’ one-yard line in sudden-death overtime. At the post-game press conference, a reporter ventured to question Unitas’s play-calling decision: “That was a pretty dangerous pass, wasn’t it? What if it had been intercepted?” The reporter was the first televised victim of “the look.” “When you know what you’re doing,” Unitas replied without needing to raise his voice, “they’re not intercepted.”

Nowadays many players feel obligated to supplement the audio and visual record of play supplied by television by advertising what has just happened. The newest wrinkle on this style of irrepressible self-expression is praying in the end zone after scoring a touchdown.

The Abdullah Case and Ensuing Fallout

In the fourth quarter of a game between the Kansas City Chiefs and New England Patriots at Arrowhead Stadium on September 29, 2014, a pass thrown by New England quarterback Tom Brady was intercepted by Kansas City safety Husein Abdullah. Abdullah returned it 39 yards to the New England end zone, where he dropped to his knees in prayer.

End-zone touchdown celebrations are now so commonplace that rules have been drafted to cover them. One of those rules forbids celebrating while “on the ground.” The referees invoked this rule, penalizing the Chiefs 15 yards on the ensuing kickoff for “unsportsmanlike conduct.”

That did not end the matter, though. Two days later, the NFL’s league office announced that the official decision had been in error. Why? It seems that “there are exceptions made for religious expressions,” according to NFL vice-president for football communications Michael Signora. But the referees may have been confused by Abdullah’s body language; he slid on his knees rather than simply kneeling down. Probably sensing an opportune moment, the well-known organization CAIR (Council on American-Islamic Relations) lodged an objection to the original ruling. According to an article in the Kansas City Star (“NFL Admitting Error on Abdullah Flag,” October 1, 2014, by Tod Palmer), “Abdullah is a devout Muslim.” The CAIR spokesman urged the league office to “clarify the policy” so as to “avoid the appearance of a double standard” for Muslims and non-Muslims.

The sensitivities of Americans have been abraded by over a half-century of controversy over the separation of church and state. Now the debate over public religious observance has invaded the football field or, more specifically, the end zone. Will theologians have to be on call for replay decisions by officials? Should the NFL nail a thesis on the separation of church and red zone to the main gate of its stadiums? Is all this really necessary?

The Economics of Player Celebration 

Does associating end-zone prayer with celebration seem odd? Abdullah himself referred to his action as “prostrat[ing] myself to God.” Still, the religious faithful at their devotions are often called “celebrants.” In any case, the attributes of prayer and those of celebration are virtually identical in this particular context, which allows us to apply economic principles to both types of action. Both interrupt the normal flow of play and divert attention away from the game and toward the celebrant. Either kind of action might please some fans and annoy others.

One interesting thing about this example is the diametric tacks taken by the economist and the non-economist. The non-economist feels compelled to ascertain whether prayer itself is “good” or “bad.” A particularly discriminating non-economist might put that to one side and focus on whether or not prayer is a good thing in this particular context; e.g., on a football field before tens of millions of spectators. The economist may or may not feel qualified to supply answers to those questions, but does not care about the answers because they needn’t be answered by any particular individual. Markets exist to answer questions that individuals cannot or should not answer.

Professional football is an intangible product supplied by the National Football League and its member franchises (teams) to consumers (fans). That product consists primarily, but not solely, of competitive athletic performance. A rhetorical question posed previously in this space asked: If O. J. Simpson were still in full flower of his athletic skills, would he be working as a running back in the NFL, all other things equal? The obvious answer is no, because football fans do not want to watch murderers play professional football, no matter how talented they may be.

The advent of touchdown celebration allows us to add another qualifying example to our definition of the pro-football product. To the degree that some fans enjoy and even encourage end-zone celebrations, it is clear that they derive satisfaction (or utility, in economic jargon) from this practice. That means that the pro-football product is defined as “competitive athletic performance plus entertainment.”

This is not merely an ad hoc formulation cobbled together by an economist for a column. In the same Sports section of the Kansas City Star that carried the story of the NFL’s recantation of the penalty on Abdullah, the adjacent story is a profile of Chiefs’ cornerback Sean Smith. Consider Smith’s comments about his flamboyant style of play and the attitude of the Chiefs’ coaches to the on-field exhibition of his personality.

“‘I think (the Miami game) gave the coaches a chance to see that when I’m able to go out there and just be myself and let my personality hang out there, not only do I play well, but people feed off my energy,’ Smith said.” [Quoting reporter Terez A. Paylor] “‘Smith, like his other more animated teammates, appreciates Coach Andy Reid’s philosophy. He encourages his players to play with passion and let their personalities shine through on the field, and Smith has embraced that approach this season.'”[Back to Smith again] “‘Coach emphasizes to let your personality show, go out there and cut loose, and be yourself and have fun…That’s something I definitely took personal. I’ve been a very enthusiastic guy. I like going out there and having fun and putting a smile on people’s faces.'”

This constitutes an implicit endorsement by a player and head coach, as cited by a beat reporter, of the economic model developed above.

Does this mean that end-zone celebrations are a good thing? Does it mean that players have a right to indulge them? Does it justify the NFL’s policy? Or condemn it? The answers to these questions are various forms of “no.” End-zone celebrations are one more input into the productive process, no better or worse a priori than any other. They may or may not be appropriate. Players have no “right” to indulge in them because players do not control the production process – the team does. The NFL is the franchisor; it has the right to control end-zone celebrations only if they affect its ability to provide the right competitive environment for the teams and not when only team profitability is at stake.

A last key question may be the one most frequently asked when this issue arises in public controversy. What about the player’s “right” of free religious observance?

Why Freedom of Religion Does Not Guarantee the Right to Celebrate in the End Zone 

Freedom is defined as the absence of external constraint. It does not guarantee the power to achieve one’s aims over opposition; in particular, it does not confer rights. A right can be enjoyed only when it does not abrogate the exercise of somebody else’s right. A contract is a voluntary agreement that imposes legal duties on both (all) parties to it.

These definitions lay the groundwork for our understanding of prayer in the end zone.

Husein Abdullah is an employee of the Kansas City Chiefs football team. He helps produce professional football entertainment but he does not control the mix of inputs into that product. The team decides who the other players will be, what style of football the team will play, what offensive plays the team will run, what defensive sets the team will employ, who the coaches, assistant coaches and trainers will be. If the team chooses all these inputs into the production of professional football entertainment, why should it not also control the nature of end-zone celebrations? Of course, the team may opt for spontaneity by giving free rein to players’ imaginations, just as conventional entertainers in show business may opt for improvisation over a scripted performance. Still, the team will almost certainly forbid players from celebrating by making obscene gestures to opposing players, revealing intimate body parts to fans and performing other acts virtually guaranteed to offend fans rather than entertaining them.

So we should hardly be astonished if the team should choose to regulate an action as potentially sensitive or embarrassing as an act of religious observance – should we? And, speaking as students of economic logic, we can make no objection to that – can we?

How about Husein Abdullah? Or, for that matter, any religious celebrant of any religious denomination? Is he being treated unfairly? Are his rights being violated?

No. As an employee of the team, Abdullah works at the direction of the team and for its benefit. The fact that Abdullah is engaging in a religious observance in this particular case is irrelevant. Abdullah certainly has freedom of religion. He has freedom of speech, too, but that doesn’t give him the right to say anything and everything under the sun in his capacity as an employee with no fear of repercussion.

Suppose Abdullah were an employee working in an office building. Does he have the “right” to pray at the top of his lungs while wandering around and between the desks of his fellow employees? No, he has no right to disrupt the workplace in this fashion even with the excuse that freedom of religion allows him the right of religious observance. Similarly, his “right” to pray in the end zone is circumscribed by team policy.

Does this mean that the Abdullahs of the world are inevitably doomed to disappointment in their longing to prostrate themselves before God in the end zone? There is no reason to think so. We know, for instance, that celebrations were once frowned upon and suppressed yet are now practically de rigueur. There seems no way to predict what twists and turns this penchant for celebration will take because there is no way to predict how the tastes of the public will change.

Are we afraid that “discrimination” against unpopular minority groups (Muslims, for example) will proliferate? No, we are not, because in this context the term discrimination loses its familiar colloquial meaning. There is no arbitrary exercise of power against a group because no business has a duty to employ all inputs to an equal degree. Instead, businesses have a duty to their owners and consumers to employ inputs based on productivity precisely by discriminating in favor of the more productive and against the less productive. Whether the inputs are engaging in religious observance, speech or any other activity does not matter. If a player can produce a productive form of celebration, this will make money for his team and provide the player with a celebratory meal ticket. If not, the player will lose the privilege of celebrating in the end zone. Business is not about what the boss wants or what employees want – it is about what consumers want. Economists characterize this principle as consumer sovereignty.

If a player demands a right to pray in the end zone, what he is really demanding is not freedom, nor is it the exercise of a valid right. Rather, it is the power to abrogate his duty to his employer at whim. As often emphasized in this space, this confusion of freedom and power suffered by the general public has been repeatedly exploited to political advantage by the left wing.

The Absurd Position in Which the NFL Finds Itself

The framework for analysis outlined above is simple and logical. It is an outgrowth of the system by which we divide labor to produce and exchange goods and services. Its pellucid clarity stands in brilliant contrast to the framework under which the NFL currently operates.

The NFL currently has rules governing player celebrations. These rules are part of the code that governs play on the field. Violations are punished with penalties such as the one Abdullah earned for the Chiefs. Consequently, the rules must be mastered, interpreted and applied by the referees. Inevitably, as with all sports decisions made by referees or umpires, subjective perceptions and interpretations cause mistakes and controversy. (The distinction between kneeling and sliding to his knees probably reminded Abdullah of the judging on Dancing With the Stars.) Meanwhile, the entities whose interests are most directly affected – team ownership and management – must sit back and await the chance to appeal any wrongful decision later.

And the fans – the people for whose benefit the system operates – don’t get any direct say in this administrative process. Whereas in a competitive market, input from fans directly determines the nature and extent of player celebrations, the regulated market gives immediate control to the administrative mechanism of the NFL. This allows the entertainment part of the product to contaminate the competitive part when penalties are levied for unsportsmanlike conduct, whereas under a competitive system the team handles problems of unsuitable celebration outside of the context of the competitive contest.

That’s not all there is to object to about top-down NFL regulation of end zone celebration. In fact, it may not even be the worst of it. The Abdullah case illustrates the political hazards of the top-down approach. The NFL began by wanting to suppress inappropriate celebration, which is surely not objectionable in and of itself. But by doing the regulating itself instead of leaving it to the market, the NFL left itself open to the pressures of every special interest with an ax to grind. Because the NFL has no special interest in the profits of any one team, it has no incentive to favor popular celebration; because it is a bureaucratic organization, it is an easy target for those pressures, CAIR being merely the most recent interest to step up to the grinder.

Suddenly, the NFL finds it can’t simply ban a form of celebration it doesn’t approve of (by “any player on the ground”) because that would run afoul of “religious observance.” Imagine – religious observance interfering with the conduct of a football game, when previously the only thing the two had in common was Sunday. And the minute the NFL starts making an exception for “religious observance,” it then has to confront the issue of different – and conflicting – religions. Wonderful – the two things attendees at a dinner party are never supposed to mention are politics and religion, and both are now elbowing their way into the end zone. What next? Will Stars of David start popping up on player helmets as an expression of their “right of free speech?” If only the fans had the power to throw a flag against the NFL for interference!

The General Principle at Work Here 

Americans have forgotten the value of allowing markets to decide basic questions. A recent Wall Street Journal op-ed commented offhandedly that we have lost confidence in free markets as a result of the Great Recession. If so, this is a monumental irony, since that event was caused by government interference with, and subordination of, the market process. It is not clear how much of the current attitude originates with a loss of faith and how much with simple ignorance. Regardless of the source, we must reverse this attitude to have any hope of survival, let alone prosperity. We know markets work because the world in general and the U.S. in particular would never have reached their present state of prosperity unless markets were as effective as free-market economists claim they are. The pretense that regulated, administrative markets are a vehicle for perfect “social justice” is not merely a sham – it is a recipe for tyranny. Administrators possess neither the comprehensive information nor the omniscient sense of fairness necessary to decide whose celebrations to allow, which ones to ban and what standard to apply to all.

The best thing about the example of touchdown celebrations is that it provides a side-by-side illustration of free markets and regulated, administrative markets. The free market is player celebration as it evolved in recent years, encouraged by fan response and governed by individual teams. The Kansas City Star excerpts show in so many words that this market exists, and the evidence of our senses shows that it works just as economic logic predicts it will. And our ever-more-dismal experience with top-down, bureaucratic NFL regulation shows that rule by fiat and by ventriloquists in the chattering classes is an escalating failure.

What about the older fans who are appalled by player celebrations and long for the good old days of strong, silent, heroic players like Brown and Unitas? Why, we’ll just have to find a team that suits our tastes – or found one.

DRI-271 for week of 9-21-14: The NFL and Domestic Violence: Too Much Action or Not Enough?

An Access Advertising EconBrief:

The NFL and Domestic Violence: Too Much Action or Not Enough?

Two recent highly publicized cases of domestic violence involving current National Football League players have attracted reams – nowadays, “mega-pixels” might be more apropos – of publicity. In addition to the cases themselves, controversy has swirled around the issue of action taken, or not taken, by the NFL itself in response to the incidents. What is the responsibility of the league in these cases?

As always, economics has much to offer in answer to these questions.

The Bare Facts of the Ray Rice and Adrian Peterson Cases

Both of the cases involve star running backs, All-Americans in college and All-Pro caliber performers during their NFL careers.

Ray Rice was a star rusher who accumulated the second-highest total rushing yardage of any Baltimore Ravens running back during his career. On March 27, 2014, he was indicted for third-degree aggravated assault for punching his fiancée in an elevator and knocking her unconscious. His subsequent conviction and lenient sentencing on this charge actually attracted less publicity than did a videotape of the incident that showed him delivering the punch and dragging the apparently unconscious woman from the elevator. This videotape was delivered to NFL security by a law-enforcement officer and then released by the website TMZ. The resulting adverse publicity had two effects: the NFL changed its “player conduct policy” and Rice’s contract with the Ravens was terminated on September 8, 2014. Meanwhile, Rice’s fiancée had become his wife.

Since leaving college in 2007 and joining the Minnesota Vikings, Adrian Peterson has established himself as one of the NFL’s leading running backs. In 2012, he missed breaking Eric Dickerson’s all-time single-season NFL rushing record by a mere nine yards.

His personal life has been as turbulent as his professional life has been productive. His father was a convicted drug dealer. In 2013, he discovered the existence of his two-year-old son, then living with the boy’s mother and her current boyfriend – only to lose him weeks later after the boy was allegedly assaulted by the boyfriend.

On September 11, 2014, Peterson was indicted by a grand jury for allegedly beating his four-year-old son with a tree branch on May 18 of this year, injuring the boy’s legs, back, ankles, buttocks and genitals. The charge was “negligent injury to a child.” Initially, Peterson was suspended for one game by the Vikings. On September 17, 2014, Peterson was placed on the NFL Commissioner’s Exempt/Permission List, requiring him to “remain away from all team activities.” The Vikings have given indications that he does not fit into their future plans.

The Public Controversy

Some scandals explode out of nowhere, like a building with a gas leak. Others blow up as the predictable culmination of accumulating circumstances, like a cache of dynamite reaching the end of its lit fuse. Then there are those that gather mass gradually, like an avalanche that begins with a single boulder. The last category fits the domestic violence scenario, in which public condemnation gradually rose to a crescendo. Spokesmen and spokeswomen for various organizations opposing domestic violence serially rose to denounce the actions of Rice and demand that something be done about them and him. Print and broadcast media mouthpieces formed a chorus echoing those sentiments. Politicians put their ears to the wind and sensed a sound-bite opportunity. “If the NFL doesn’t police themselves,” Sen. Kirsten Gillibrand (D-NY) courageously declared, “we will be looking more into it.” “We,” of course, referred to the Senate, sixteen of whose members then forwarded a demand that NFL Commissioner Roger Goodell establish a “zero tolerance” policy toward domestic violence.

Commissioner Goodell proved to be the lightning rod for most of the public criticism, thus reinforcing the suspicion that the doctrine of free will and individual responsibility is a dead letter in contemporary American society. The vocally indignant were apparently alluding to the NFL’s “conduct policy,” instituted on April 10, 2007. It applied to off-field behavior of players, coaches and front-office personnel but excluded illicit drug and performance-enhancement matters, which are covered by a separate policy. Between 2007 and 2011, seven players were disciplined under the policy in eight separate actions. (One player was reprimanded twice.) Five of the actions were taken in response to criminal convictions or allegations, one for general misbehavior and two for unspecified conduct. The stated purpose of the policy was to “improve the league’s image.”

Mr. Goodell addressed Rice’s behavior in a press conference last week. But, as Wall Street Journal columnist Holman Jenkins put it (“Way Beyond the NFL’s Competence,” WSJ, September 24, 2014), Goodell apparently “said the wrong thing, or failed to say the right thing, or said the right thing the wrong way – or something.” According to CBSSports.com, Goodell was guilty of “not nailing the moment.” The Los Angeles Times convicted the Commissioner of not “get[ting] it.” The National Organization for Women escalated the charging contest by demanding Goodell’s resignation. This must represent a new high – or low – in the evolving doctrine of corporate responsibility. The league commissioner is supposed to resign because a player’s spat with his girlfriend-cum-wife gets out of hand.

Commentators Weigh In

Sober voices eventually began to be heard. Joseph Epstein, arguably America’s leading essayist, rightly accused the finger-shakers and fist-pounders of “moral preening” (“Blitzing the NFL With Moral Preening,” The Wall Street Journal, September 22, 2014). “Politicians…university psychologists…media colleagues…the people [the scandal] will make feel good are those who get to pronounce upon it… expressing shock, moral outrage, dudgeon to the highest power.” It provides them “a splendid opportunity… to exhibit their own high and irreproachable virtue.”

Unfortunately, Mr. Epstein’s analysis of the problem itself exhibits the same shortcomings he displayed with his retrograde, liberal take on the violence in Ferguson, MO. “Should anyone be shocked at the irrefutable evidence of domestic violence in the NFL?” No, he concludes, because the players are “men who make their living through violence, and for whom violence well-executed has made millionaires of nearly all of them… The weekly paycheck of Adrian Peterson… is near $700,000.”

Mr. Epstein’s leftish envy of free-market outcomes was now breaking loose. He gave it free rein. “To be a star athlete in America is to grow up… with no one… ever saying no to you. Fame, money, women come rolling in for these athletes, the favorites, or so it sometimes seems, at least while they are still young, of the gods.” Now Mr. Epstein was positively green with envy. At least he was venting his spleen in the right direction – but with his gall bladder rather than his brain cells.

“When someone does say no… is it all that shocking that the athletes respond with violence? I do not say it is right… only [that] it’s not shocking. What is shocking is that there isn’t a lot more of it.” Now it’s out of his system. It’s the old liberal line – the system is the “root cause” of individual misbehavior, while the miscreants are helpless victims, acted upon rather than independent actors.

Mr. Epstein’s failure to distinguish between uncontrolled domestic violence and limited violence within acceptable and desirable constraints is simply inexcusable. It was once commonplace to equate veterans with out-of-control wackos. 1930s musicals would moan “they gave him a gun” and imply that “society” had only itself to blame for any subsequent fiasco, from bank robbery to murder. Knute Rockne’s view, that college football was invaluable competitive training for a future in a competitive society, seems a lot closer to the mark than Epstein’s nonsense.

Like Mr. Epstein’s “civil rights” analysis of the Ferguson episode and his call for a black leader to soothe the savage breasts of the unruly natives of Ferguson, his domestic-violence explanation is a throwback to the liberal pieties of the 1960s. The “root cause” thinking of that bygone era is as dead as the big-government, welfare-state approach to social policy. Not only is top-down management of human behavior demonstrably ineffective, it is also a recipe for moral nihilism. The failure of an acute social critic like Mr. Epstein measures the depth of our morass.

As is so often the case, Holman Jenkins provided a fresh breeze of thinking on the issue. “It’s been decades since police and courts gave a pass to wife-beaters. Mr. Rice was hauled before a grand jury; given the video evidence he might well have gotten the full five-year sentence…[but the state of New Jersey] seems to have seriously applied the criteria for its first timers’ leniency program, in which the victim, Mr. Rice’s now-wife [emphasis added], was allowed an important say… Obviously, an alleged refusal to face up to domestic violence is not the problem here… Domestic violence is a common form of violence for a reason: People fight with those they know. This creates dilemmas for the justice system absent when stranger assaults stranger – dilemmas even a $40 million-a-year league president might struggle to resolve to the satisfaction of any but the shallowest of media shouters.”

Jenkins notes ironically that virtually every full-length discussion of domestic violence in the NFL “segu[es] to those problems that football faces that actually pertain to football [such as] concussion.” He might have added drug use and performance enhancement as well.

The Economics of the Domestic Violence Scandal

In 1970, Milton Friedman authored a classic article entitled “The Social Responsibility of Business Is to Increase Its Profits.” His thesis seemed to be perfectly encapsulated in the title. As usual, though, it was widely misinterpreted. The political Left accused Friedman of saying that only profit matters and all other human values are and should be irrelevant. But Friedman was arguing the economic case for specialization. Business firms exist for the specific purpose of creating goods and services. Although he did not cite it, Friedman could have referred to the earlier classic 1937 article by Ronald Coase, “The Nature of the Firm.” Coase deduced that business firms spring up when organizing an activity internally is less costly than transacting for it in the marketplace. Extending the principle, a business firm produces those things whose internal cost of production is lower than their external cost of purchase. Virtually everything listed under the heading of the “social responsibility of business” is something too costly for business to produce internally because it does not specialize in doing it. Curing the problem of domestic violence surely fits under this heading.

The frustration shown by the Left toward this laissez-faire stance implies that we are giving up on our problems. But that is far from the truth; indeed, it is the opposite of the truth. By assigning the solution of a problem to the agency best equipped to solve it, we are making the best use of the scarce resources available to solve problems – thereby maximizing problem solution. And the NFL’s domestic-violence conundrum is a case in point.

The NFL is a business. It produces a kind of entertainment product called “professional football competition.” The business form it uses is called franchising, a popular method utilized by many American icons like McDonald’s. The NFL’s franchises are called teams; these include the Baltimore Ravens and Minnesota Vikings for whom Rice and Peterson played and play, respectively.

What should the NFL – the franchisor - “do” about the “problem” of domestic violence among its players? (Or, for that matter, among its coaches or front-office personnel?) Nothing. Domestic violence is not a problem for the NFL, the franchisor. The NFL’s job is to enable its franchises to provide the best possible product to fans, who are the consumers of its product. What about the NFL’s “image?” The NFL’s image depends on how well it does its job of supporting its franchises and how good their product is. In that regard, it is just as important for the NFL to refrain from doing bad things as it is for it to do good things. The NFL should not waste its time and money trying to solve problems that it cannot solve and which are better solved by others.

But the fact that domestic violence committed by players is not a problem for the NFL does not mean that it might not be a problem for the particular team that employs the erring player. The word “might” is the operative one; it reinforces the rationale for excluding action by the league. The NFL does not, and cannot, know whether the particular episode is a problem for the team or not. That is a decision for the team to make, not the league office. The NFL does not run its franchises; it does not make the day-to-day, profit-and-loss, operational decisions for team management. Only the team is legally entitled and circumstantially qualified to make those decisions. This decision is one more operational decision for the team to make. In the case of Ray Rice, the Baltimore Ravens made it by deciding to terminate Rice’s contract.

We can easily envision a player’s union advocate representing Rice objecting to that decision in language like this: “Rice’s actions may have been unfortunate, but he faced legal sanction and paid for his crime. This has nothing to do with his ability to perform on the football field and therefore does not justify the termination of his contract.” That hypothetical case, seductive though it may seem at first hearing, is quite wrong. Ray Rice, and every other professional football player, is not merely an athletic performer. The product he produces is entertainment, and it includes more than mere athletic performance. It also includes a standard of behavior and image acceptable to the public in an athletic performer. The fact that this standard is subjective does not detract a whit from its reality.

O.J. Simpson immediately stopped appearing in movies when he was charged with murdering Nicole Simpson. Had he still been playing football, had he still retained his youthful athletic skill, his football career would nonetheless have reached an immediate close. People do not want to watch a murderer act in movies. In the 1940s and early 50s, a substantial minority of actors and actresses lost the ability to act in motion pictures produced by major U.S. studios, although they still could act on Broadway and abroad. Americans did not want to watch Communists act in movies.

The subjective line with respect to domestic violence is much less clear, but it obviously exists. We are willing to tolerate some measure of violence in domestic relations among athletes, but there are limits to it. Who decides what the limits are? The free market, which means the people directly affected by it. Those people are the consumers of the product athletes produce – the fans – and the producers of that product – the teams. Fans express their views at the ticket office and by direct contact with the team. The team acts in accordance with its view of short- and long-term profit, based on the reactions of fans and the athletic prowess of the player. This is the system calculated to produce the best possible football product for consumers – the best outcome not only for the NFL but for consumers and producers as well. The rest of us have no stake in the matter.

Wait a minute – what about Mrs. Rice? She has an obvious stake. But Mrs. Rice’s interests were served by the agency best equipped to serve them – the criminal justice system. She had her day in court and even had her views prevail when Rice was given a lenient sentence. The busybodies of the media are actually arguing to overrule her and impose extra penalties on Rice over and above those dictated by law and the team. In essence, they are applying the “helpless victim” codicil of Joseph Epstein’s “root cause” hypothesis. Here, it is Mrs. Rice who is the helpless victim in need of the all-wise counsel and direction gratuitously provided by the blabbermouth class, who specialize in telling the rest of the world how to run their lives.

Is the Adrian Peterson case special because it involves a child? Well, it is certainly special in the legal sense, since the child’s presumed inability to act as his own advocate in a way analogous to that of Mrs. Rice argues for government involvement. But those special considerations don’t introduce any factors conducing to involvement by the NFL. The need for careful consideration by the team is still present, even enhanced, by the possibility of child abuse.

What about the NFL’s drug and performance-enhancement policy? Does this violate the doctrine of specialization a la Friedman and Coase? No. Here the NFL is arbitrating the issue of competitive balance between teams, an appropriate action for a franchisor. For example, franchisors such as McDonald’s routinely award franchises by providing geographic separation between franchisees to limit competition between them. They want franchisees to compete with other franchisors – Hardees, Burger King and Wendy’s – but not with each other. Similarly, the NFL does not want some teams to gain a competitive edge by employing players who use steroids or human-growth hormone while others feel compelled to respect the wishes of fans by banning use of those substances by their players. Of course, it is still up to the NFL to adopt a wise and effective policy – but the policy is not objectionable a priori.

Domestic Violence Reconsidered

The most recent entry in the domestic violence op-ed derby is revealing. In “A Better Way to Reduce Domestic Violence in the NFL,” author Richard J. Gelles estimates the statistical expectation for acts of substantive domestic violence among 2,016 males between 21 and 39 years of age – the demographic base of NFL players. Assume that 90% of these players are in relationships with women. About 4% of these relationships would produce an act of domestic violence annually. That would be about 80 cases. This would lead to about 20 arrests. But not all of these acts, or arrests, would be perpetrated by the male – maybe 10-20 would be female-caused.
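
As a check on the arithmetic, the back-of-the-envelope calculation Gelles describes can be reproduced directly. The sketch below simply restates the column’s own assumptions (the player count, the 90% relationship share, the 4% annual incidence); the arrest rate is inferred from the column’s “about 20 arrests” and is an assumption, not a figure taken from Gelles.

```python
# Back-of-the-envelope reproduction of the expected-incidence arithmetic
# summarized above. Every input is an assumption taken from (or inferred
# from) the column's summary of Gelles, not new data.

players = 2016              # males aged 21-39, the NFL's demographic base (per the column)
share_in_relationship = 0.90
annual_incidence = 0.04     # share of relationships producing a domestic-violence act per year
arrest_rate = 0.25          # inferred: roughly 20 arrests out of ~80 cases

relationships = players * share_in_relationship
expected_cases = relationships * annual_incidence
expected_arrests = expected_cases * arrest_rate

print(f"Relationships:             {relationships:.0f}")      # ~1,814
print(f"Expected cases per year:   {expected_cases:.0f}")     # ~73 -- the column rounds this to "about 80"
print(f"Expected arrests per year: {expected_arrests:.0f}")   # ~18
```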

This puts a different face on the current hysteria about domestic violence in the NFL, even if we include the aggravated assault accusation against Jonathan Dwyer of the Arizona Cardinals and the accusation against the Chicago Bears’ Brandon Marshall, which goes all the way back to 2006. Suddenly, we are not confronted with an epidemic demanding emergency action but an age-old problem meriting careful consideration.

Alas, Gelles – a sociologist – offers two “solutions,” neither one within shouting distance of cogency. The first, recourse to a “Case Review Committee” to arbitrate domestic disputes, addresses a task best handled by the principals themselves, without interposition by agencies like the NFL. The second is even sillier. Gelles wants “professional sports [to] apply sanctions judiciously.” We might call this the “Spike Lee” solution: “Do the right thing.” He helpfully explains that suspending Ray Rice for a third of a season “would be appropriate” without providing the general rule that makes it appropriate. This is worse than useless.

The beginning of wisdom on this issue was broached by Jenkins when he observed that “people fight with those they know.” Consider an example that provides a reasonable parallel to the NFL case.

In the late 1920s and 1930s, contract bridge was a craze in the United States. As inconceivable as it might seem today, bridge was front-page news. The great popularizer of bridge, Ely Culbertson, organized a challenge team match with his principal competitor for public favor, Sidney Lenz. For days, the running tally of the 1931-32 Culbertson-Lenz match was reported in the press, on radio and on neon billboards in Times Square. Over the years, bridge retained its popularity as the nation’s favorite card game, surpassing poker. Culbertson’s successor, Charles Goren, appeared on the cover of Time Magazine in the 1950s.

Throughout this reign of popularity, there was a link between bridge and domestic discord between husbands and wives. This was popularly recognized and wryly treated by humorists and the movies. This good-natured acceptance persisted despite occasional violent outbursts such as the famous Bennett case in Kansas City, MO, in 1929. Mrs. Bennett was so outraged and frustrated by her husband’s incapable display as her bridge partner that his culminating depredation, failure to land a four-spade contract at their regular bridge game, drove her ballistic – she pulled a pistol and shot him through the door of the bathroom to which he had frantically retreated. It is not clear to what extent Ely Culbertson’s straight-faced analysis of Mr. Bennett’s mistakes as declarer contributed to the jury’s decision to acquit Mrs. Bennett. This seems to be the precursor of our modern tendency to balance distaste at domestic violence with a demand for competitive excellence.

Women bridge players have vastly outnumbered men. Yet only one husband and wife partnership has represented a country in the Bermuda Bowl, the international world team championship that has been played since 1950. (They were not notably successful.) Traditionally and notoriously, husbands and wives have found it very difficult to sustain a long-running successful partnership in top-level competitive bridge. Carrying the principle that familiarity breeds contempt even further, long-running partnerships in general are historically rare in bridge, despite the demonstrated competitive advantage accruing to established partnerships over short-duration combinations.

In the bridge world, a few experts have legendary reputations for their gentlemanly demeanor and politeness to opponents. (The opposite is more nearly the rule in top-level bridge.) Yet these players have usually found it difficult or impossible to play harmoniously with their wives. One of these world-famous gentlemen was the perpetrator of an explosion at the bridge table in which he astonished hundreds of onlookers by yelling at his wife: “And to think that this woman is the mother of my children!” Their successful and long-running partnerships have been with men. Even these are hard to sustain. Again, it needs to be stressed that this is the rule rather than the exception over some 90 years of stressful high-level competition involving many thousands of competitors around the world.

What are we to make of this?

Holman Jenkins referred pejoratively to a past practice that he called “[giving] a pass to wife beaters.” This might more precisely be called erring on the side of legal inaction when the assault involves married couples. In the old days, there was implicit recognition that the intimate familiarity between married couples created an inherent potential for frustration, discord and violence that, as again noted by Jenkins, simply did not exist between strangers. That does not mean that those were the good old days, because today we wince at casual references to wife beating that crop up in old movies, books and plays. Still, today’s pendulum has swung so far in the opposite direction that husbands and wives are legally treated as exact equivalents of strangers. Mrs. Rice’s decision to marry her husband after he knocked her cold, and her plea on his behalf in court, illustrate the absurdity of this state of affairs.

The “Solution” to Domestic Violence

Some problems are inherent in the human condition. Domestic violence is one of them. There is no “solution” to it. Its mitigation is not a top-down process administered by bureaucratic organizations like the NFL or through compulsory arbitration by the National Labor Relations Board. What little help can be provided by third parties must be offered on a voluntary basis by the private sector. The people best equipped to solve the problem must be in charge. That means the principals – the husband and wife.

The NFL should stay out of it.

DRI-257 for week of 9-14-14: McClatchy Series is a ‘Contract to Cheat’ Readers of the Truth

An Access Advertising EconBrief:

McClatchy Series is a ‘Contract to Cheat’ Readers of the Truth

A recurring theme in this space is the corrupt deterioration of journalism. This process began long before the rise of the Internet, and it helped usher in the industry-wide decline in circulation that has now reached the crisis stage. The decay is most evident in investigative journalism, which has abandoned factual research methods in favor of left-wing political advocacy.

The latest proof is supplied by the McClatchy chain’s three-part series entitled “Contract to Cheat,” which appeared in early September. McClatchy reporters spent a year reviewing transactions from construction projects commissioned by the federal government beginning in 2009 as part of the so-called “economic stimulus” program. According to the article appearing in the Kansas City Star, some of whose reporters contributed to the research, the stimulus was negated by dishonest behavior of contractors. This behavior consisted primarily of “misclassification” – the listing of workers as independent contractors rather than employees.

The Allegations

Contractors supposedly engaged in misclassification of workers for economic reasons. First, the misclassification allowed the contractors to avoid paying payroll tax on wages paid to workers. Second, it allowed them to pay lower wages by evading minimum-wage standards for wages paid on federally contracted work. Third, it allowed them to avoid paying workers’ compensation benefits to workers who were misclassified as independent contractors. Fourth, it allowed them to avoid an increase in their unemployment “experience rating” when the workers were eventually laid off following completion of the work. Fifth, it facilitated the workers’ avoidance of income tax on the wages paid.

In the early installments of the series, stress was placed on losses suffered by taxpayers from contractor cheating. Although the article was long on indignant rhetoric and short on specifics, readers could draw the conclusion that those losses were due to the reduced collection of rightful payroll taxes, the lower level of wages on which taxes were levied and the outright avoidance of tax on income that was never reported. In later installments, greater stress was placed on losses suffered by workers in the form of lower wages received than were due according to statute for work performed, loss of future Social Security benefits from unpaid payroll taxes, loss of unemployment and workers’ compensation and the psychological detriment of insecurity.

A banner proclamation of the series was the claim that contractor cheating thwarted and blunted the effects of federal-government stimulus spending. Despite the headline status of this claim, it was merely asserted and never supported by either logic or evidence. The only economist quoted in the series, Jared Bernstein, former chief economist to Vice President Joe Biden, commented (briefly) only on the issue of misclassification and was silent on its interaction with the stimulus program.

In keeping with the contemporary modus operandi of investigative journalism, the series employed interviews, anecdotes and quotations from non-authoritative sources to achieve maximum emotive effect. Despite the reference to economic stimulus, economic theory and logic were nowhere employed or cited.

Needless to say, the lack of economics means that readers of the series were cheated of the truth. In effect, McClatchy operated under an implicit contract with the political Left. The outlines of that contract are clear to anybody with an elementary understanding of economic theory and logic.

John Keynes’ Body Lies A-Spinnin’ In His Grave

The first article in the series quotes President Obama’s grave declaration that the federal government was “the only entity with the resources to act” in the face of the economic depression engulfing us in early 2009. This astonishing assertion, somehow swallowed at face value by a bewildered nation, is patently false. The federal government owns no resources other than the assets it commands. These consist largely of land holdings, most of them in the western U.S. It did not sell these lands to foreigners in order to finance the stimulus. So the President’s rationale for action was a lie.

The true rationale was the one cited by his economic advisors, who have consistently followed a Keynesian philosophy. John Maynard Keynes legitimized the practice of deficit spending by national governments as a corrective to recession and depression. He rationalized this by positing a chronic lack of effective demand, or spending, as the source of recession and unemployment. Government must increase the volume of total spending on output by increasing its own spending and inducing the private sector to spend more. It increases its own spending by spending more than it withdraws in tax receipts. It induces businesses to spend more by supplying more money (“liquidity”), lowering interest rates and inducing more investment spending. It induces consumers to spend more by reducing taxes, thus increasing their disposable incomes, whence their consumption spending derives. In addition, consumer spending will increase in response to government and investment spending increases due to the so-called “multiplier” effect of the resulting increases in income.
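
For readers who want the mechanics behind that last sentence, the standard textbook statement of the spending multiplier runs as follows (nothing here is specific to the McClatchy dispute; $c$ denotes the marginal propensity to consume):

$$\Delta Y \;=\; \Delta G + c\,\Delta G + c^{2}\,\Delta G + \cdots \;=\; \frac{1}{1-c}\,\Delta G, \qquad 0 < c < 1.$$

On this reasoning, each dollar of additional deficit-financed spending is supposed to raise total income by $1/(1-c)$ dollars – the claim the following paragraphs dispute.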

Obama administration economists – and their acolytes, such as Paul Krugman – have mouthed this party line with a straight face, despite the fact that it has been discredited for decades. Books have been written outlining its flaws. We might sum them up by saying that government must acquire the “resources” it commands, and this acquisition (more than) negates any stimulative effect that the spending itself generates. But the worship of spending itself remains sacred within the fraternity of Keynesian economists – which might better be termed a “coven.”

And that is why the McClatchy article is an eyebrow raiser. In so many words, the authors nonchalantly accuse cheating contractors of thwarting the stimulus. It is one thing to accuse them of breaking the law. That may or may not be true, but it is at least consistent with the allegations they make. But the McClatchy authors’ conclusion about the stimulus makes absolutely no sense even if we assume that their every allegation against contractors is the gospel truth.

First of all, consider the authors’ insistence that taxpayers were “cheated” by contractors. Even if we assume this to be true, that can’t have reduced the impact of the stimulus. Keynes himself advocated deficit spending; i.e., increasing government expenditures relative to tax receipts. One way to achieve that is by increasing government spending; another way is by reducing tax receipts. Every elementary macroeconomics textbook published from the 1940s to the 1980s acknowledged this. Contractor cheating, to the extent that it did occur, increased the impact of the stimulus. This applies equally to payroll-tax evasion by employers and income-tax evasion by workers. Indeed, we have been hearing for six years how important it supposedly is to get more income into the hands of those low-income workers who are ostensibly more avid to spend. Well, the series details how the cheating process did just that, by allowing them to evade taxes. That may have been illegal, but there is no doubt whatsoever that it was stimulative according to the dictates of Keynesian economics.

There is no contradiction here in saying simultaneously that behavior is illegal and economically desirable. The series “accuses” contractors of committing these illegal acts in order to lower their bids and beat out competitors for the government contracts. Again, this may be illegal, but it is exactly how the competitive market process works when prices are allowed to fluctuate in accordance with supply and demand and not artificially fixed by government. For years, economists have been complaining that artificially high wages mandated by government laws such as the Davis-Bacon Act were harming workers and consumers by restricting employment, incomes and output. Here is concrete proof – contractors and workers were willing and able to complete government contracts for lower wages than mandated by government. This means that there was money left over to spend on other bridges, dams and “shovel-ready” projects that would stimulate the economy. The so-called harm of the lower wages paid to the “cheating” workers was really a benefit in real economic terms because it allowed more goods and services to be produced using the same total stimulus money. That is exactly how free markets react to economic depression; lower wages stimulate more employment, production and real income. The authors unwittingly hint at this solution when they quote a worker defending his decision to work at a sub-minimum wage: “I was just happy to be working at all.” If producing more stuff with the same amount of money is supposed to be economically harmful, then we are living in Alice’s Wonderland, not reality. Even Keynesians know that more goods are preferable to fewer.

We now realize that the McClatchy series is an affront to general economic theory, not just to left-wing Keynesian theory. Every government mandate cited by the McClatchy authors – payroll taxes, income-tax withholding and the rest – contributes to the “wedge” driven between what the employer pays and what the employee receives. Traditionally, the left wing maintains that this tax burden is worth every penny because the services it finances are so valuable to workers. Paradoxically, the Left also maintains that the burden is trivial to employers and doesn’t discourage much work effort, despite the huge value it creates.
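
To see the “wedge” concretely, consider the illustrative figures below. The FICA shares are the familiar statutory 7.65% apiece; the workers’ compensation, unemployment-insurance and withholding rates are simply assumed for the sake of the example and are not taken from the series.

```python
# Illustrative tax-wedge arithmetic. The point is the gap between what the
# employer pays and what the worker keeps, not the particular percentages.

stated_wage = 20.00            # hourly wage the worker nominally earns
employer_payroll_tax = 0.0765  # employer share of FICA (Social Security + Medicare)
workers_comp_and_ui = 0.05     # assumed workers' comp + unemployment-insurance loading
employee_payroll_tax = 0.0765  # employee share of FICA
income_tax_withholding = 0.10  # assumed average withholding rate

employer_cost = stated_wage * (1 + employer_payroll_tax + workers_comp_and_ui)
worker_take_home = stated_wage * (1 - employee_payroll_tax - income_tax_withholding)
wedge = employer_cost - worker_take_home

print(f"Employer pays per hour: ${employer_cost:.2f}")     # ~$22.53
print(f"Worker keeps per hour:  ${worker_take_home:.2f}")  # ~$16.47
print(f"Wedge per hour:         ${wedge:.2f}")             # ~$6.06
```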

But now the McClatchy authors – apparently without even realizing it – provide empirical evidence that completely refutes the longstanding left-wing position on taxation and work effort. Employers and workers are so anxious to evade this tax burden that they actually break the law. This fully vindicates the longtime supply-side view that lower taxes will call forth more production and work effort. And then the McClatchy authors blithely assert that this is bad for the economy because…because…well, they don’t give a reason other than because it is against the law. Of course it’s against the law; the government has made economically beneficial competition unlawful.

When the Left violates the precepts of Keynes and free-market economics, you know it’s gone off the deep end.

And That’s Not All, Folks

Does this world-class stupidity exhaust the stock of errors committed in the McClatchy series? No. Nobody ever went broke underestimating the economic literacy of metropolitan newspaper staff. The second article in the series is occupied primarily with excoriating contractors, regulators, and politicians for failure to anticipate or correct the misclassification of workers.

Why doesn’t the IRS cross-check data to discover the tax evasion? Workers are assigned fake Social Security numbers. Why can’t workers be interviewed to uncover the falsehoods? They are given phony names and addresses. Everybody agrees that misclassification has been commonplace for many years. And everywhere the investigators went they encountered nonchalance, lethargy and lassitude rather than rage, disbelief and energetic action. Outrageous! Whoever heard of such a thing? Why, anybody would think that we are really governed by a massive, inefficient, insensitive bureaucracy. In fact, the authors quote one observer’s assessment that “you’ve got all these agencies, and this is their fiefdom. They don’t care what the other [agencies'] regulations are.”

Confronted with this massive regulatory ineptitude, what would an alert, inquisitive reporter say? The first thing that would occur to him or her would be this: If the stimulus program really depended for its effectiveness on the efficient operation of this apparatus, then the stimulus program was manifestly unwise and doomed to failure from the outset. (We are not even requiring our alert reporter to be economically knowledgeable, just minimally intelligent.)

The authors go to considerable trouble to document the ambiguity of the “independent contractor” definition, stressing that there is “no one definition” of the distinction between employee and contractor. But assuming this is true, why is it surprising that the law is so difficult to enforce? If so much supposedly rides on accurately classifying workers and the authors themselves find it difficult to explain how to do it, why are contractors villains for failing to accomplish it? Is it really contractors who are cheating us here? Or is it the government, by setting up this arbitrary distinction for its own convenience and then angrily making criminals out of ordinary people for failing to do what it is unable or unwilling to do?

The Tipoff 

The jaundiced view of McClatchy and its motives derives from decades-long experience with newspapers and reporters. It can be verified by consulting the ostensible triumphs of investigative journalism over the last 25 years, which are notable for their left-wing advocacy and their lack of factual accuracy. That advocacy is on prominent display in the McClatchy series. The tipoff to the authors’ bias is their attitude toward the workers employed by the “cheating contractors.”

The contractors themselves are the villains, the cheaters of the titular “Contract to Cheat.” They are greedy, insensitive, opportunistic scofflaws. Every principal in the contracting process evinced the same attitude when interviewed by the authors: “What? Who? Me? I didn’t know…I didn’t realize…Nobody told me…It wasn’t my job…wasn’t my place.” But these protestations are treated with disdain only when made by contractors, whom the authors tacitly assume to be lying snakes.

What about the workers? Well, the closest thing to an assessment of blame levied on workers is the authors’ bland acknowledgment that workers are “responsible for the reporting of their income to the IRS.”

No spit, Spurlock. They are indeed responsible according to law, and derelictions committed by employers don’t relieve workers of that legal responsibility. It is just as plausible to posit that employers acted in response to pressure from workers as it is to assume that employers cooked up a scheme to defraud the government.

But the authors tacitly treat workers as both dumb and innocent. If (say) a Republican legislator were to characterize America’s workers as too dumb to be responsible for their actions or too dumb to understand a simple employment relationship, he would be castigated and forced to resign. But that is the implicit position of the McClatchy authors. Illegality was rampant, nobody did their due diligence, the system failed completely and workers – well, workers were innocent bystanders who just stumbled into things by accident and did what they were told and never meant to hurt anybody or break any laws and – perdóname, señor; no hablo inglés. (Yes, immigrants appear in the series as the obligatory exploited, downtrodden mass – acted upon, but not acting in their own behalf.)

McClatchy is an organ of the left wing. Union workers and low-income workers are a leading constituent class of the Democrat Party. They must be absolved of blame. That accounts for the wildly unbalanced portrait of the principal parties in “Contract to Cheat.” Of course, this stance is totally at variance with responsible journalism. And that is further proof that responsible journalism is virtually extinct in America today.

The Truth

The McClatchy series is indeed notable. It has uncovered useful and pertinent information. But the authors of the series have spun the information into a bizarre, distorted pattern that reflects their political (dis)orientation. Their economic illiteracy has produced a laughably inaccurate interpretation of their information, wrong no matter whose economic theory of stimulus one adopts. Their blindness to economic logic allows them to confuse illegality with inefficiency. Their left-wing bias demands that they ignore the obvious implications of the bureaucratic ineptitude and inefficiency they expose. And their pro-labor stance requires that they wash workers clean of all sin. In fact, rigid big government has strapped everybody into a regulatory straitjacket that offers a perverse choice: obey the law and everybody loses, or violate it and everybody gains. In that environment, everybody is a lawbreaker but the government is the morally guilty party.

There was indeed a “Contract to Cheat.” But the McClatchy authors were the contractors, bound by their political affiliation to their advocacy position, and their readers were the ones cheated of the truth.

DRI-296 for week of 9-7-14: Airlines Fleecing Consumers? No, Columnist Fleecing Readers

An Access Advertising EconBrief:

Airlines Fleecing Consumers? No, Columnist Fleecing Readers

These days Americans fight fiercely for the coveted status of “victim.” In bygone days, we were rebuked for our headlong pursuit of success at any cost. Today, we compete to construct the best excuse for failure.

The favored tactic in the pursuit of victim status is to blame some malign force that has it in for us. Since something large enough to constitute a malign force will usually take on an impersonal character, it is hard to assign a personal motive to its actions. Consequently, it is convenient to claim membership in a class of people similarly afflicted by the force.

Consumers are a leading victim class because they are numerous and because they deal with large, impersonal institutions. Businesses are often nominated as victimizers precisely because their relations with consumers are usually so impersonal.

A recent example turned up in the Washington Post, August 28, 2014, and was entitled: “Are Domestic Airlines Making Money By Fleecing Consumers?” The byline read “By Christopher Elliott, Columnist.” (Hereinafter, he will be referred to as CEC.)

Using the logic of economics, we will discover that CEC’s column stands as an excellent example of consumers being fleeced by a victimizer who exploits their weakness. But the victimizers are not the airlines, as the column contends. CEC himself is the victimizer. And the victims are not airline consumers. They are the readers of CEC’s column.

Columnist Feeds Readers’ Victimization Fantasies by Demonizing Airlines 

CEC begins with this icebreaker: “Why are airlines raking in record profits?” Not pausing to wait for a response to this rhetorical question, he responds by speculating. “Maybe they’re monetizing your personal data… without your explicit consent.” Is he about to reveal an investigative scoop? No, he is apparently flinging some mud on the airlines as cosmetic prep for his next accusation. “Then again, maybe it’s the fees. Airline add-ons, which cover ‘optional’ services for everything from reserving a seat to changing a ticket, used to be included in the cost of almost every fare. But over time, airlines began separating them from base fares. They sometimes neglected to mention that little detail, helping them earn more money but frustrating customers, critics say.”

Decades ago, a journalism school would have used this sort of opening as a primer on how not to write a story. The word “maybe” is a tipoff to the substitution of the reporter’s personal opinion for fact. In this case, the author is a columnist, not a reporter. Does that give him carte blanche to throw his opinions around as though they were nickels rather than hefting them thoughtfully as if they were manhole covers? At the very least, he should offer some supporting evidence for the speculation that the airlines are committing criminal breaches of privacy. Even columnists do not have the privilege of casually libeling their subjects.

CEC then calls upon the favorite weapon of today’s marauding journalist – the supporting anecdote. A man “didn’t know about the change fees when he booked [airline] tickets.” He had to postpone his trip, and complained that “they charged me $300 to change my tickets” while offering him a $142, limited-duration credit instead of a fee refund. His reaction was classic victim-speak: “Airfares are outlandish, fliers are charged for everything and comfort is a thing of the past? How can that be allowed [emphasis added]?”

There ought to be a law! Or so says CEC – and a yet-to-be-determined number of U.S. Senators, whose legislative chamber has launched an investigation of “airline fee transparency and passenger privacy.” The investigation will “determine whether current rules go far enough to protect consumers and, if not, whether new laws are needed.” CEC’s phrasing seems unduly circumspect here; if the Senate finds that current rules don’t go far enough, experience tells us that no power on Earth will stop them from passing more laws.

The Senate, led by Sen. John D. Rockefeller (D-W.Va.), “wants to know exactly how much airlines earn from checked baggage, advance seat selection fees, preferred-seat fees and trip insurance” – data that the Department of Transportation (DOT) does not now collect at this level of disaggregation. In other words, the Senate demands the kind of information publicly disgorged only by public utilities, even though the airlines have been federally deregulated for over 35 years. The Senate also demands to know the airlines’ privacy policies.

CEC predicts that the inquiry will find “that airlines profit from fees and peddling personal data,” which will probably lead to more “consumer-protection” laws. He is not the only prophet quoted in the column. A “data-privacy advocate” also lauds Congress for “expressing interest in…the absence of any federal law protecting air travelers’ privacy.” Where once newspaper stories reported what happened, we now have columns that predict what will and/or should happen.

CEC dutifully quotes a “spokeswoman for …a trade organization for major U.S. airlines.” She insists that “domestic air carriers are committed to ensuring that customers always know the price of their ticket before they buy” and that airlines are “pledged to protect their customers’ privacy.”

“Charging customers for services they value and are willing to pay for – which is common…globally – has also enabled airlines to provide consumers the ultimate choice and control over what they purchase,” she concludes.

CEC contrasts his predicted (!) Senate bill with one passed by the House, the “so-called Airfare Transparency Act,” which “would allow airlines to disclose taxes and fees separately from the fares they quote. If signed into law, critics say, it would give airlines a license to make money by deceiving customers.” But wait – isn’t the whole premise of this column that airlines are already doing just that – “fleecing” their customers through this kind of separate disclosure? Indeed, in the very next paragraph, CEC predicts that the Senate inquiry will produce “a noisy battle between [those] that believe the airline industry should operate free of consumer regulations and those who think that America’s air carriers are shamelessly fleecing passengers.” Then why do airlines need a “license” to do what they are already doing now without one?

So far, we can see that CEC’s column is dedicated to demonizing the airlines by portraying them as victimizers and their customers as victims. CEC’s fact-free libel of the major airlines is bad enough. But the way the column plays on the economic ignorance of readers to stoke their victimization fantasies is even more reprehensible. Economics will show that the airlines are not victimizers; that their customers are not victims; and that CEC himself is victimizing his readers by exploiting their ignorance of economic logic.

The Economics of Airline Competition

Marxists have traditionally used the phrase “it is no accident that…” to denote the coincidence of events with the Marxian theory of history. In that same vein, we might say that it is no accident that CEC begins his anti-airline diatribe by excoriating airlines for their “record profits.” The current anti-business vogue relies on a pejorative theory of profit as its foundational argument. Since victimization implies that the victimizer gains at the victim’s expense, some highly visible measure of that gain is a handy tool for exponents to wield. In this view, profit is good for business; business is the evil victimizer and consumers are the helpless, passive victims. So, what is good for business must be bad for consumers. Since everybody is a consumer and there are a lot more consumers than business owners, this is a promising line of attack for left-wing journalists.

The modern welfare state is an inflationist environment. Big government cannot exist and thrive without large – and growing – rates of spending. This is the left-wing, welfare-state equivalent of economic growth, the difference being that economic growth is organic, healthy and sustainable while inflationist, welfare-state spending is inherently artificial, unhealthy and time-limited. There are only three ways for government to get money to spend: by taxing it away from citizens, borrowing it from abroad or creating it in one of various ways. Successively higher taxes will eventually spark peaceful or armed revolution; borrowing will eventually exhaust foreign sources; and money creation will destroy the value of money through inflation. (That value includes not merely the purchasing power of money but, even more important, the ability of the price system and interest rates to efficiently allocate the flow of goods and services now and in the future.)

Continuous inflation causes nominal monetary values to rise continuously. This means that profits appear continuously to be increasing even though their true economic value may not have risen. For example, the purchasing power of profits distributed as income to shareholders may not have risen and may even have fallen even though nominal profits have gone up.

But this doesn’t stop the press from solemnly reporting that a particular business or industry has earned “record profits” in this quarter or year. The oil industry is the favorite target, but this tactic is adaptable to any business. In this case, recent news reports celebrated the record quarterly profits earned by American Airlines, United Airlines and Southwest Airlines, thus giving apparent substance to CEC’s lead. Neither those news stories nor CEC’s column bothered to tell the rest of the story behind these “records.”

First, monetary values rise every year, so there is a tendency toward annual record-setting as long as demand is relatively stable. That means that nominal values should be adjusted for inflation using a price index before any records can be detected. If that were to happen, the whole incidence of business record-setting would change dramatically.
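
A minimal sketch of the adjustment just described, using invented profit figures and index values purely for illustration:

```python
# Deflating nominal profits by a price index before declaring a "record."
# All numbers are hypothetical; only the method matters.

nominal_profit = {2004: 800_000_000, 2014: 1_000_000_000}  # dollars (invented)
price_index    = {2004: 85.0,        2014: 110.0}          # CPI-style index (invented)

base_year = 2014
real_profit = {
    year: profit * price_index[base_year] / price_index[year]
    for year, profit in nominal_profit.items()
}

for year in sorted(real_profit):
    print(f"{year}: nominal ${nominal_profit[year]:,} -> real (2014 $) ${real_profit[year]:,.0f}")

# The 2014 figure is a nominal "record," yet in 2014 dollars the 2004 profit
# (about $1.035 billion) was actually larger.
```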

Second, the airline industry is a special case. Reading CEC and other left-wing pseudo-journalists would give you the impression that the major airlines are rolling in profits and that you could have become rich by owning their stock. Uh-uh. Ever since airline deregulation got off the ground in 1978, the industry has been a bloody competitive battleground in which survival, not profits, has been the overriding goal of most members. Exhibit A: That “record profit” hauled in by American Airlines last quarter was its first since emerging from bankruptcy recently. American just paid a dividend to shareholders for the first time since 1980. United Airlines is better off, but not by much. And these are the survivors in an airline industry that once included firms like TWA, US Air and Midwest Express. Still want to travel back in time and own airline stocks?

Airlines are among a class of industries, also including railroads and broadcast media, that share the common features of high fixed costs and low marginal costs. In order to produce any output at all, the business must incur heavy costs of initial investment and setup. Once the business is operational, its marginal cost of producing an incremental unit of output – marching one more passenger onboard, loading one more car with coal or sending out the incremental broadcast – is extremely low. That means that such businesses often must incur a high debt load but can remain in business for a long time while just covering incremental costs with prices that fall short of profitable levels. It is a recipe for fierce price competition, low profits and some firms eking out a bare existence.
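
A stylized illustration of that cost structure, with invented numbers: the fare comfortably covers the low marginal cost of carrying one more passenger, yet falls short of average cost once the heavy fixed outlay is spread over the flight – which is why such a firm can keep flying for a long time without earning a profit.

```python
# Stylized high-fixed-cost / low-marginal-cost arithmetic for a single flight.
# All figures are invented for illustration.

fixed_cost_per_flight = 30_000.0    # aircraft ownership, crew, gates, overhead
marginal_cost_per_passenger = 25.0  # fuel burn, ticketing, snack for one more seat
fare = 150.0
passengers = 140

total_cost = fixed_cost_per_flight + marginal_cost_per_passenger * passengers
average_cost = total_cost / passengers
operating_result = fare * passengers - total_cost

print(f"Average cost per passenger: ${average_cost:.2f}")   # ~$239.29
print(f"Fare of ${fare:.2f} covers the ${marginal_cost_per_passenger:.2f} marginal cost,")
print(f"but not average cost; operating result: ${operating_result:,.2f}")  # about -$12,500
```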

Prior to deregulation, the major airlines were fat, dumb and happy under federal-government regulation led by the old Civil Aeronautics Board (CAB). The CAB cartelized the industry, setting fares that allowed everybody a nice profit while keeping prices so high that most people viewed air travel as a luxury. Airlines competed by painting their planes different colors and offering competing snacks and beverages – not by lowering their prices.

After deregulation, the CAB’s controls over pricing and entry were phased out and the agency itself was eventually abolished, with its remaining functions transferred to the Department of Transportation. Airfares plummeted. The demand for air travel zoomed skyward. Of course, many airlines had a hard time making ends meet even with this increased demand. The price of oil underwent periodic upward spikes and each one claimed a casualty or two from the airline industry, where jet fuel – refined from crude oil – is a key input. The high union wages enjoyed by employees of the old-line firms like TWA, United and American were a heavy chain around the necks of those businesses, while Southwest Airlines built a consistently profitable business model based on non-union labor, safety, superior service and efficient management.

People like CEC, and most of his online commenters, never tire of bad-mouthing airline deregulation. Yet the years after deregulation provided a laboratory comparison in real time between regulated and deregulated airlines. Deregulation applied specifically to airlines engaged in interstate commerce; i.e., most of them. But many states still supported an intrastate airline industry of planes that flew only routes within that single state. And the fares of those airlines remained sky high. It wasn’t unusual to find regulated intrastate airfares that were much higher than deregulated interstate airfares for routes traveling longer distances and in higher demand.

Of course, anybody who actually believes CEC can always buy airline stocks and sit back, waiting to get rich. When that doesn’t happen, though, the buyer should blame CEC, not this writer.

In the broader sense, CEC and the political Left are barking up the wrong tree in the wrong forest by demonizing profit. During the last 36 years of deregulation, the sole airline to always turn a profit has been Southwest. And the perennial choice among consumers as the most popular airline has also been Southwest. In a free-market system, profit serves at least two indispensable functions: it identifies the sectors where consumers want additional resources to go and it rewards those firms that best serve consumer wants. In these cases, it is consumers who sit in the driver’s seat, determining the direction and amount of profit, and consumers who ultimately benefit from the goods and services that are produced profitably. Nobody can claim that Southwest Airlines was a monopolist or an oligopolist getting fat by sweating money out of the hides of its customers.

The Economics of Bundled Pricing

Economics says that CEC is the opportunist capitalizing on the ignorance of his customers, not the airlines. Review the comments of the dissatisfied customer quoted by CEC: “Airfares are outlandish, fliers are charged for everything and comfort is a thing of the past. How can that be allowed?”

Airfares are not outlandish but cheap compared to the fares prior to deregulation. That refers not only to airline deregulation but also to energy deregulation, which eventually allowed the technological advances that drove up domestic supplies of oil and drove down oil prices.

The comment that “fliers are charged for everything” is an obvious reference to the fees referenced by CEC. But the comment itself is inane. Of course consumers are charged for everything – how could they not be? Who else is there to pay for the goods and services consumers receive?

Business firms are not charitable institutions. The economic purpose they serve is to produce things more efficiently than we can ourselves. The firms must place a value on all the goods and services they produce because all of them require the use of scarce resources. Only if consumers are willing to pay the costs of all the resources used in production can we conclude that production is efficient. And we can’t draw that conclusion unless all of those costs are reflected in the price consumers pay. Moreover, businesses can’t remain in business unless consumer payments cover all business costs.

Economic logic tells us that CEC’s disgruntled customer is living in a left-wing fool’s paradise, where goods and services are magically provided free. But wait – CEC himself told us the same thing in the second paragraph of his column, when he said “Airline add-ons, which cover ‘optional’ services for everything from reserving a seat to changing a ticket, used to be included in the cost of almost every fare.” Right! And before deregulation, those costs were inflated by everything from union featherbedding and administered wages to government-imposed high fares to frills that consumers cared little about.

So what in the world is all this complaining over fees? Are the complainants really, truly saying “before the fee imposition I paid $X and now I’m paying $X plus the cost of the fee, therefore I’m worse off by the amount of the fee”? But that can scarcely be right, can it? Otherwise, airlines would have the business equivalent of a perpetual motion machine; all they’d have to do is arbitrarily pick something else to charge the customer for in order to inflate the cost of the ticket. (Charge the passenger for putting up the jetway, for taking the boarding pass, for delivering the safety lecture, ad infinitum.) In the fool’s paradise, airlines can arbitrarily charge any price they want when there is no government regulation to protect consumers. But as we now realize, it is competition that protects consumers, not government regulation.

Today’s fee increases are not arbitrary price increases. Instead, airlines are partially unbundling the elements of the airline flight – earning extra revenue from the customers who want a particular element while allowing the customers who don’t want it to exclude it and pay less. They do the same thing now with beverage service when they offer alcoholic beverages for a price. By serving alcohol only to those few people who want a drink badly enough to pay for it, they allow the rest of us to fly cheaper than would be the case if they had to add on the cost of alcohol to the ticket price. This is not a strategy for raising prices but a strategy for avoiding price increases.
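A minimal arithmetic sketch, using invented numbers, shows who pays what when a single amenity is unbundled:

```python
# Hypothetical flight: 100 passengers, only 25 of whom want a $30 amenity.
base_cost = 120.0       # cost of flying one passenger with no amenity
amenity_cost = 30.0     # cost of the amenity per passenger who actually uses it
passengers = 100
users = 25

# Bundled: the cost of serving the 25 users is folded into every ticket.
bundled_fare = base_cost + amenity_cost * users / passengers

# Unbundled: the base fare excludes the amenity; users pay a separate fee that covers it.
unbundled_fare = base_cost
amenity_fee = amenity_cost

print(f"Bundled fare (everyone pays):    ${bundled_fare:.2f}")    # $127.50
print(f"Unbundled base fare:             ${unbundled_fare:.2f}")  # $120.00 for the other 75
print(f"Optional fee paid only by users: ${amenity_fee:.2f}")     # users pay $150.00 all-in
```

Total revenue is the same under either scheme ($12,750 on these numbers), but unbundling lets the 75 passengers who skip the amenity fly for less while the 25 who want it bear its full cost – the sense in which fees are a strategy for avoiding fare increases rather than a disguised fare hike.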

 

Why should the airlines want to avoid price increases? Deregulation has proved that the overwhelming bulk of the American public wants to fly from point A to point B as cheaply as possible – period. But there is a minority of the public that is willing to pay for amenities in the air. Airlines desperate to survive in the Darwinian survival-of-the-fittest struggle that is today’s airline business are now trying to serve both classes of customers by keeping base fares as low as possible while charging the minority fees for those amenities that can be separately priced.

The political Left may find this competitive desperation unseemly but the one thing airlines shouldn’t be accused of is victimizing their customers. Unfortunately, that is how CEC and his ilk make their living – by trashing free markets and their practitioners and victimizing readers.

When CEC’s Grumpy Old Man grouses that “comfort is a thing of the past… how can that be allowed?” he is apparently unaware that he is the one allowing it and that calling for government intervention is asking for the government to substitute its arbitrary dictates for his freedom of choice.

The reason “comfort is a thing of the past” is that consumers value comfort less than the cost of providing it. But if CEC or Mr. Grumpy objects, all they have to do is start their own airline and sell it to the public by advertising its comfortable amenities. If there is really a market for a Plush Air or Lavish Skies, lenders will pony up the cash, just as numerous lenders have done over the last 36 years to finance one failed start-up airline after another.

Full Disclosure? 

Is there the hint of a scintilla of a point anywhere in CEC’s disgraceful flight of fancy? Well, his consumer advocate (Sally Greenberg of the National Consumers League) punctuates her own silly diatribe against airline profits with the point that “many of the fees are poorly disclosed.” This is the only shot worth taking against the airlines among all those fired by the hand-held missile launchers of contemporary journalism.

A passenger who wants to change a flight should know in advance that a fee is being charged for the change. “In advance” means at the time of original purchase. Formerly that sort of notification was handled mostly by travel agents. But the same Internet revolution that has lowered the cost of term insurance by decimating the ranks of insurance agents and lowered the brokerage cost of stock transactions by eviscerating the ranks of stockbrokers has also winnowed the ranks of travel agents. And it is not too hard to imagine the relevant notifications falling through the cracks of the system. But a consumer can only fall victim to that kind of informational glitch once before being put on notice. That is not exactly like having your net worth confiscated by an airline.

Free markets are not perfect. But they work vastly better than anything else mankind has yet devised. Eventually the free market will even catch up with people like CEC, whose own customers – his readers – are really the ones being fleeced.

DRI-287 for week of 8-31-14: The Hollywood Blacklist as an Economic Phenomenon

An Access Advertising EconBrief:

The Hollywood Blacklist as an Economic Phenomenon

Very few people will ever develop an econometric model. Even fewer will use abstruse mathematics to formulate economic theory. A larger subset of the population is called upon to interpret the output of these economic tools, but this group is still microscopically small. To pinpoint the practical value of an economic education, we will have to look elsewhere.

Economics should enable us to understand the “blooming, buzzing confusion” of our daily life, to borrow the characterization of the 19th-century psychologist William James. Indeed, the great historical questions of yesterday should yield their mysteries to basic economic logic.

No economic exercise is as deeply satisfying as the parsing of a great historical dispute or debate using economics. When this exercise overturns the conventional thinking, it is one of life’s most exhilarating moments.

The famous Hollywood Blacklist is a ripe subject for this economic treatment.

The Blacklist as Portrayed by the Political Left

The stylized portrayal of the Blacklist by the political Left begins in the 1930s, when numerous actors, actresses, screenwriters and other rank-and-file motion-picture personnel were strongly attracted by the tenets of socialism and Communism. Indeed, for many Communism was the practical embodiment of socialism. This attraction led them to participate in rallies, join organizations and make contributions in kind and in cash to the socialist and Communist movements. Some even joined the Communist Party, but these were mere flirtations, more emotional than intellectual. Almost all of these Party memberships were short, transitory affairs that, however, would later come back to haunt the participant.

Even the biggest movie stars were contractual employees of the big movie studios. The operational heads of the studios, moguls like Louis B. Mayer of Metro Goldwyn Mayer, Darryl F. Zanuck of Twentieth Century Fox and Harry Cohn of Columbia Pictures, were fanatically dedicated to the profits returned by their movies. This led them to take an unseemly interest in the private lives of their actors and actresses, even to the point of influencing the stars’ marital, pre-marital and extra-marital pursuits. The moguls feared that unfavorable publicity about a star would destroy his or her box-office value.

After World War II, American attitudes toward the Soviet Union underwent a reversal. The public became inordinately fearful of Russia and of Communism. This wave of emotion was typical of a country that was governed by a chaotic, competitive spirit rather than by a tightly regulated bureaucracy run by left-wing intellectuals, or what the radical economist Thorstein Veblen had called a “Soviet of engineers.” The same spirit had made American society racist (anti-black, anti-immigrant) and sexist (anti-woman). Now it had become “anti-Communist,” which was the same thing as anti-intellectual, anti-democratic and fascist. After all, the Fascists and Communists had opposed each other in the Spanish Civil War prior to World War II, hadn’t they?

This inordinate fear was exploited by Senator Joseph McCarthy of Wisconsin, who used his government investigative committee as a tool to further his political career by pretending to expose Communists operating in government and virtually every other nook and cranny of institutional America. The Left originated the term “McCarthyism” and used it as shorthand for the Cold War anti-Communist mentality and all its representations.

The moguls were less interested in anti-Communism as a political project than in its financial implications for their industry. They feared that the public would associate the left-wing sympathies of their actors, actresses and screenwriters with Russian Communism. This potential linkage threatened studio profits.

Thus was born the Blacklist. The moguls commissioned their sycophantic underlings and outside organizations, such as the newsletter Red Channels, to provide lists of Hollywood artists who were current or former Communist Party members. Those on the list were blacklisted – they could no longer work. The lists were compiled partly by offering an inducement: Those “naming names” of other current or former Party members would be spared punishment. The question “Are you now or have you ever been a member of the Communist Party?” became associated with the House Committee on Un-American Activities and McCarthyism in general.

The Left saw the dilemma faced by witnesses testifying before security hearings as a Catch 22. A witness admitting current or former Communist Party membership would subsequently be blacklisted. A witness refusing to “inform” on his friends and/or colleagues would also be blacklisted. A witness citing his or her Fifth Amendment right against self-incrimination as justification for a refusal to testify would be blacklisted. But a witness who testified and named names could work only at the cost of eternal damnation – by universal understanding, the most despised and despicable of all human beings is an Informer.

Thus, the Blacklist is pictured as an intellectual Dark Age, a dark night of the American soul. Some blacklistees (John Garfield, J. Edward Bromberg) were so traumatized by their plight that they died from the stress. Others (Larry Parks) suffered permanent destruction of their careers. Most (Lee Grant, Dalton Trumbo, Carl Foreman, Marsha Hunt, Michael Wilson, Jules Dassin) lived in literal or figurative exile for one or two decades, suffering financial reverses and emotional isolation. A few (Edward G. Robinson) coped with a quasi-blacklist (“greylist”) that produced similar but less severe effects.

The Blacklist hovered like a great plague over the land for many years until it finally ended suddenly in the early 1960s. The heroic Kirk Douglas (or, in some retellings, the heroic Otto Preminger) openly hired long-blacklisted screenwriter Dalton Trumbo, thus breaking the back of the Blacklist.

The Blacklist as Seen Through the Lens of Economics

If the left-wing tale of the Blacklist has a fairy-tale quality, that is apt. Despite the acceptance and even reverence with which it is treated, it makes little sense. The principals behave in unreal ways, unlike actual human beings impelled by rational motives. The portions of the story that are correct are woefully incomplete. The rest is inaccurate. Most misleading of all is the complete absence of economic logic from the tale.

America’s “inordinate” fear of Communism. To be sure, fear is a prime mover of human action. But fear is conditioned and shaped by our rational understanding of the world around us. After World War II, the Soviet Union’s public face was rapidly transformed. Russia blockaded Berlin. It invaded or formally occupied Eastern Europe. After a few years, it acquired nuclear weapons that it pointed at the U.S. It aided its client states in the export of Communism throughout the world and indirectly fought the U.S. by aiding North Korea against South Korea. Eventually, the confluence of all these actions resulted in the term “Cold War.”

We know now what we strongly suspected then – that the Soviet Union had unleashed the worst campaign of mass murder in human history during the first half of the 20th century. Joseph Stalin supervised the murder of millions and killed more of his own citizens than did the Nazis in wartime. We also know that the American Communist Party was the Soviet espionage apparatus in the U.S.

Given all this, the fear of Soviet Russia does not seem “inordinate.” Moreover, the actions of the Communist Chinese subsequent to the fall of Nationalist China in the late 1940s validate the fear of Communism generally. Red China did not export terror and death to the extent that Soviet Russia did. But their murderous reign within China itself surpassed even Stalin’s butchery.

In this light, the American reaction against Communism seems mild and tentative. And indeed we know that prior to the election of Ronald Reagan to the Presidency in 1980, the Cold War was all but lost. While the American public displayed a well-founded and prophetic fear of Communism, our intellectual elites showed a shocking indifference to it. This began with the attempts by the Truman administration to cover up the discovery of high-level Communist penetration of the U.S. State Department and continued with the friendliness shown to Communist dictators by the American intelligentsia and to Marxist ideology by the American academy. Marxist economics has long exceeded free-market economics in popularity at American universities. Mainstream economics textbooks, notably the best-selling Economics by Nobel laureate Paul Samuelson, touted the superiority of Communist central planning to American free markets in promoting economic growth right up to the day when the Soviet Union collapsed.

Time after time, the American public’s fear of Communism was validated while the American elites’ acceptance of it was not.

The Moguls and the Blacklist. The Left portrays the Hollywood Moguls as craven cowards because they were profit-motivated. Of course, when those same moguls occasionally dabbled in politics without a business rationale, the Left excoriated them for that as well. This leads us to suspect that the Left simply approved of the Communist sympathies of the blacklistees.

Left-wing intellectuals criticized corporations in the 1930s for putting the interests of executives ahead of shareholder and consumer interests. Yet here the moguls are criticized for doing just the opposite. Using the Left’s own premise – but applying it within the model of economic logic – the moguls were safeguarding the interests of consumers and shareholders when they instituted the Blacklist.

The movie moguls developed – or, more accurately, stumbled upon – the “star system” of moviemaking as a way of stimulating movie attendance by focusing audiences’ attention on movie stars. This system worked so well that in the 1930s and 40s, average weekly movie-theater attendance approached the population of the entire country. (Today it languishes at 10-15% of U.S. population.) The leading actors and actresses may have been salaried employees, but they were the best-paid people in the nation – behind only the moguls themselves.

The appeal of the stars rested on the image they projected. Of course, audiences knew that Clark Gable was not really a reporter or a British naval officer and Errol Flynn was not really a pirate or a medieval aristocrat-turned-rebel-bandit. But they believed that the roles were extensions of the stars’ true personalities – Gable’s as a straightforward, aggressive male and Flynn’s as an irresistible cavalier. Ditto for Gary Cooper as a man of few words and James Stewart as hesitant and bashful.

In order to keep their profit machine humming, the moguls inserted morals clauses in studio contracts allowing termination for “moral turpitude” or anything that would destroy the good will vested in those personalities. From the standpoint of consumers – and therefore from the standpoint of shareholders and the moguls as well – a movie star was a product consisting not wholly but largely of image. A mogul that ignored the image projected by a star would have been derelict in professional duty.

Communism was a label that threatened a studio’s brand just as (for example) genetic modifications affect the brand of certain foods today. The comparison is apt. Communism was a genuine threat, regardless of whether or not any actor or actress really ever espoused Communist doctrine. Genetic modification, on the other hand, is a bogeyman whose dangers are illusory. But in both cases, the relevant consideration was and is what consumers think rather than objective truth. Consumer beliefs, truth aside, will govern their actions and the marketplace outcome. Consequently, moguls must act on their perception of what consumers perceive.

The moguls accurately judged that any actor or actress linked to Communism would be box-office poison, as would any writer whose words were being spoken on screen. Therefore, they had to purge their industry of Communists and suspected Communists – and do so in the most visible way possible. After all, any executive could, and presumably would, say that there were no Communists working for him. But the Blacklist was an exercise in product labeling – just the sort of thing that the political Left likes and even demands from corporations. The moguls were trying to obtain independent certification that their motion-picture product was “Communist -free.” Audiences could safely admire the actors and actresses appearing in it; they could safely consume the spoken and visual content contained within it. If the moguls had been selling apples, the Left would surely have admired the energy and determination devoted to preserving the purity and wholesomeness of the product.

But since we were talking movies, the Left was outraged by the Blacklist.

The Blacklist helped usher in an undemocratic reign of terror in America. Nothing prevented the dozens of competing movie studios and independent movie producers from advertising their movies by saying “we employ Communists and former Communists” or “we cast Fifth-Amendment-takers in our productions.” If the public was indifferent to this or even pleased by the idea, they could have flocked to these competing movies and enriched the maverick studios and producers. Of course, that didn’t happen because the public held no such beliefs. The moguls were neither craven cowards nor undemocratic tyrants. They were doing exactly what producers are supposed to do in a free market and what the Left criticizes producers for not doing: catering to consumers by insuring the quality of their product, thereby catering to shareholders by safeguarding profits.

The Blacklist was undemocratic and unfair because it denied blacklistees the means of earning a living. This is completely untrue. At worst, blacklistees were denied the ability to work in Hollywood productions. That is, they were denied the same thing that actors and actresses are denied when they are not cast and writers are denied when their scripts are rejected – which is the fate of the overwhelming majority of all actors, actresses and writers. In this case, the denial was figuratively stamped “unsuitable due to Communism.” This was a subjective evaluation, just as all rejections are subjective. Of course, the particular artist involved will take the blow hard and view it as unfair – just as all rejects do when consumers prefer the work of somebody else.

At all events, the so-called “victims” of the Blacklist were not denied the “right to work.” Movie actors went abroad and worked. Michael Wilson and Dalton Trumbo wrote Oscar-winning scripts submitted under false names while working and earning income abroad. Other blacklistees worked on Broadway or on television. And of course, nothing prevented them from – hold on to your seats here – getting an ordinary job and earning an ordinary living instead of earning thousands of dollars per week in Hollywood while the average American wage was less than five thousand dollars per year. Indeed, from among the few hundred documented Blacklist cases, it is often difficult to sort out those people whose Hollywood careers were ended by the Blacklist from those whose careers petered out naturally. In Hollywood as in professional sports, the average career is short though often sweet.

Among the victims of the so-called “greylist,” Edward G. Robinson made 13 movies during the short time period when he was allegedly greylisted. All but one of these was for American studios, mostly major ones. Of course, his roles were not necessarily plum ones, but that was certainly because his career was declining both before and after the Blacklist. For those whose career proved disappointing, claiming victimization by the Blacklist has provided compensation for the recognition fate denied them and an excuse for failing to fulfill their own expectations of success.

The Blacklist was evil because McCarthyism itself was evil and threatened America with dictatorship. We have shown that, far from being evil, the Blacklist was a product of free-market economics at work. The Left excoriates free-market economics when it fails – or supposedly fails – then turns around and excoriates it for succeeding while correcting its supposed errors. But even more ridiculous is the fact that the Hollywood Blacklist – today almost always linked with McCarthy and McCarthyism even by those caught in its toils – had nothing whatever to do with Joe McCarthy.

Senator Joseph McCarthy was elected to the Senate from Wisconsin in 1946. But he was virtually unknown to most of America until he made a speech in Wheeling, West Virginia in 1950. The speech concerned Communists that McCarthy alleged to reside in the U.S. State Department, not in Hollywood. And throughout McCarthy’s subsequent career, Communists in Hollywood were not an issue raised by McCarthy. McCarthy’s Senate Committee was Government Operations, not too surprisingly in view of his preoccupation with Communists in government. The government committee most often concerned with Communists in Hollywood was not even in the Senate – it was the notorious House Committee on Un-American Activities (HUAC).

Hollywood Communism made national headlines in 1947 when the so-called Hollywood Ten were called to testify before HUAC. These were a group of screenwriters, directors and producers who were known to be current or former members of the Communist Party. They included now-famous names like writer Dalton Trumbo and director Edward Dmytryk. In his memoir Odd Man Out, Dmytryk confirms that all of the Hollywood Ten were indeed current or former Party members. He recounts how the appearance of the Ten before Congress was orchestrated by the Party and how non-Communist Hollywood liberals like Humphrey Bogart, Lauren Bacall, Gene Kelly and Danny Kaye were duped into supporting the Ten. The Party line was that the Ten were exercising their First Amendment rights of free speech and free association. After all, Communist Party membership was legal.

But when the hearings began, Dmytryk was astonished to find that, rather than mounting a straightforward free-speech defense, the Ten uniformly refused to answer the Committee’s questions or name names. Their testimony consisted of diatribes against the Committee in a Communist-Party vein. This episode reinforced Dmytryk’s resolve to quit the Party and sever his ties with his Leftist colleagues. His refusal to name names led to a prison sentence for contempt of Congress, after which Dmytryk emerged one year later to testify again and salvage his career by naming the names of his Party colleagues.

In 1947, McCarthy sat in Congress but was uninvolved in the Hollywood Ten episode. He played no part in the Hollywood Blacklist. By the time McCarthy delivered his Wheeling speech, the Blacklist had already been established. McCarthy played no part in it; he was concerned with security risks in government (the State Department) and the military (the Army). McCarthyism, whatever it was or meant, was a phenomenon of the 1950s, while the Blacklist was the outgrowth of the Cold War security debates that began in the 1940s.

McCarthy is notorious today for claiming that large numbers of Communists were employed in government without naming any names. (“He never produced a single Communist.”) As is usually the case, the Left is wrong. McCarthy did name names and was usually right about those he named, such as Owen Lattimore. He also named numbers, but the numbers did not refer to those currently employed but rather to Communists known to have operated within government. We know now that substantial numbers of Communist agents operated within the State Department, for example, and the exact number is not of paramount importance today because we are still uncovering more. All this is irrelevant to the Hollywood Blacklist.

The Blacklist was evil because (a) the blacklistees were never Communists, (b) the blacklistees had every right to be Communists and still remain employed in Hollywood, (c) anti-Communism was evil by definition, (d) choose any one or all of the above. Perhaps the most amazing facet of the Left’s portrayal is its fuzziness. When discussing blacklistees like Larry Parks, the Left implies that all blacklistees were innocent victims who were selected at random by Red Channels or victimized by John Wayne, Ward Bond or an anonymous grudgeholder. It is true that fellow actors at the Motion Picture Alliance, including stars like Wayne, were involved in the interviews preparatory to blacklisting. By blacklisting a fellow performer, MPA officials might leave themselves open to a charge of thinning the ranks of their competition. But every blacklistee was a potential employee of the studio; this was the opportunity cost incurred by the moguls. They had no incentive to be randomly vicious or inaccurate, since they were cutting their own throats by doing so – and the object of the exercise was to preserve their profits, not squander them. Presumably, this is why prospective blacklistees were always given an out, either by naming names or by pleading innocence with sufficient eloquence. This latter course was taken by various stars, including Lucille Ball and James Cagney.

The Left has gotten a lot of mileage out of the implication that the blacklistees were all, or mostly, innocent. But the problem is that this does not imply that the investigations of Communist infiltration of Hollywood were wrong; it implies that there was not enough investigation. Even if the moguls had done nothing, if Red Channels and the MPA had never existed, the American public’s well-founded fear of Communism would have remained. The investigations did not convict innocent people of being Communists; they gave people under suspicion the opportunity to absolve themselves. Those who seized the opportunity – i.e., most people involved – emerged better for the process.

When the subject changes to avowed Communists like Dalton Trumbo, the Left abruptly changes its tune to focus on the unfairness of denying the writer his right to write, to earn income, support his family, etc. But what the Left is defending is not a right but rather Trumbo’s power to force people to hire him when his qualifications for hire no longer pass muster. While Trumbo would have protested that he was still the same writer he always was, the truth was that his qualifications did not consist solely of his writing talent. He also had to be free of moral taint. Would the Left defend O. J. Simpson’s “right” to work as an actor today even after a civil jury found him liable for the killings? Would they have defended Lord Haw-Haw’s right to remain employed as an announcer after he worked for the Nazis in World War II?

Indeed, suppose the word “Communist” in the entire Blacklist controversy were to be replaced by the word “Nazi” – would the Left still take the same anti-blacklist position? Of course, we all know that the answer to that question is “no.” Right-wing writers like Ayn Rand and Morrie Ryskind were subjected to the Left’s own Blacklist after they objected to the Communist penetration of Hollywood. In the ensuing years, nobody on the Left has come to their defense.

The Blacklist killed blacklistees. The few blacklistees who died, including John Garfield and J. Edward Bromberg, had pre-existing medical conditions. (Garfield’s heart condition exempted him from military service in World War II.) Medical science lacks the capability of assigning causation to an external event like the Blacklist, which is one of many potential stressful events that might or might not contribute to death.

The overarching question, though, is why any moral opprobrium should attach to the Blacklist. The moguls had no incentive to kill Garfield or Bromberg. If nobody intended to cause the deaths, then the Blacklist is like any other stressful event. All kinds of morally innocuous actions might conceivably result in a death without adversely transforming the character of the action.

The Blacklist was an anti-competitive cartel. Intriguingly, this argument was advanced not by the Left but by free-market economist Milton Friedman in his book Capitalism and Freedom. Its problem is that it fails to distinguish between actions taken simultaneously and those taken in concert. To use the O.J. Simpson case again, it is obvious that Simpson became unemployable the moment he killed Nicole Simpson. Hollywood moguls did not need to collude to achieve that outcome. The same is true of the Hollywood Blacklist. If simultaneous actions taken to insure product quality are “collusion,” then the word has been distorted beyond all semblance of meaning.

The Blacklist was not destroyed by the heroic actions of Kirk Douglas or Otto Preminger in hiring Dalton Trumbo (to write Spartacus or Exodus, respectively). The Blacklist was already a dead letter by 1960, when these movies were produced. It was killed by the waning of anti-Communism, which lost its force when Joe McCarthy was discredited during the Army-McCarthy hearings of 1954. If Douglas or Preminger had hired Trumbo in 1953, that would have been courageous. But they didn’t because – at that point – it would also have been suicidal.

Forcing witnesses to inform to keep their jobs is immoral. The injunction against informing lies at the heart of the criminals’ own code – the underworld’s prohibition against “squealing.” (It is even the title of a cult-movie classic from 1931, Howard Hawks’ The Criminal Code.) Without informing, police would be unable to solve most criminal cases; even with the sophisticated technology aired on television shows like CSI, the solution of most crimes depends on confession and prying information out of witnesses. The technique of threatening knowledgeable parties with sanctions in order to induce testimony is perhaps the most venerable – and successful – of all police techniques.

The position taken by the Left aligns it perfectly with the criminal element, which tries to preserve collusion between criminals against the substantial inducements for confession. It is those economic incentives that persuaded Dmytryk and others, such as director Elia Kazan and actor Lee J. Cobb, to relent and name names.

It is unfair that people should be held accountable for past actions that led to unforeseeable consequences such as blacklisting. When people publish embarrassing photos or posts about themselves on the Internet, they give hostages to fortune. Yet the prevailing sentiment today seems to be that they should have known better. If anybody should have known better, it was Hollywood actors with morals clauses in their lucrative contracts. Communism was both controversial and popular in the 1920s and 30s. In the aftermath of World War I, the “Palmer Raids” of 1919-20 had set a precedent for government interference with the exercise of a right to practice Communism. Yet an illusion of invulnerability and messianic notions of social responsibility persuaded countless Hollywood figures that their moral duty lay in following the red star of Communism.

If people choose to offer sympathy for former Communists, that is their business. Most of the original editors of the conservative magazine National Review were former Communists. They rebuilt their lives despite this youthful misstep by forcefully changing direction and repudiating their past. That is exactly what too many Hollywood Communists were unwilling to do, and that is why we owe them no sympathy, just as we owe their arguments no respect.