DRI-312 for week of 8-18-13: Understanding Risk, Benefit and Safety

An Access Advertising EconBrief:

Understanding Risk, Benefit and Safety

The mainstream press has propagated an informal historical narrative of safety in America. Prior to the Progressive era and the advent of muckraking journalism, the public lay at the mercy of rapacious businessmen who knowingly produced unsafe products and unwholesome foods in order to maximize their personal wealth. Thanks to the unselfish labors of investigative journalists and the subsequent creation of government regulatory agencies, products and foods became safe for the first time.

Now regulators and the press fight a never-ending battle for safety against the forces of greedy capitalism. Alas, there are so many industries and goods to regulate and so little time and money in the federal budget with which to do it.

In order to appreciate the full falsity of this doctrine, we must grasp the economic meaning of concepts like risk, benefit and safety. A good route to this goal lies through our own inner sense of the logic of human behavior.

A Reductio Ad Absurdum

To highlight the concepts of risk, benefit and safety, consider the following example. It is a reductio ad absurdum – an example “reduced to absurdity” in order to eliminate extraneous considerations and shine a spotlight on a few insights.

Assume you have only one day left to live – exactly twenty-four hours. You are aware of this. You are also aware that your death will be instantaneous and painless and your vitality, faculties and awareness will remain unimpaired up to your last second of consciousness. How will this affect your behavior?

A little thought should convince you that the effect will be profound. You have only one day left to wring whatever excitement, enjoyment and satisfaction you can from life. Will that day be business as usual, awakening at the normal time and departing to work at your job? Unless you work at one of the world’s most stimulating and fulfilling jobs, the last thing you will want is to spend your final day on Earth at work.

Instead, you will devote your time to the most intense and meaningful pleasures. These may be physical or mental, aesthetic or gastronomic, boisterous or sedate. The word “pleasure” inevitably evokes the notion of hedonism in some people, but this need not apply here. The pleasures you seek during your last day may be sensual but they may just as easily be as cerebral as reading a book or as contemplative as observing a sunset. Your personal selections from the vast menu of choice will be highly subjective, in the sense that my choices might very well differ drastically from yours. In spite of this, though, the example affords highly useful insights about economics – particularly the concepts of risk, benefit and safety.

Economic Benefit

The first conclusion to emerge from our artificial but enlightening example relates to the nature of economic benefit. In recent decades, a Martian studying Earth by scanning its news media transmissions and publications might well conclude that the benefit of human existence derives from work. After all, politicians and commentators yammer endlessly about the glories of, and necessity for, “jobs, jobs, jobs.” Taking this preoccupation at face value implies that work, in and of itself, is what makes life worthwhile. The obiter dicta of the rich and famous, who recklessly profess such heartfelt love for their profession that they would practice it for nothing, reinforce this impression.

Our example, though, shatters this shibboleth. Economic value inheres not in work but rather in the things that work produces, which yield pleasure and satisfaction when consumed. It is certainly possible to love one’s work, but it is no coincidence that the people who love it most are the ones most highly compensated for it; their earnings can purchase the most satisfaction and pleasure. It is a famous truism that nobody on his deathbed regrets not having spent more time at the office.

Risk

Ever since the pathbreaking work of economist Frank Knight some ninety years ago, economists have defined risk as mathematically expressed variance of possible future outcomes. Uncertainty, the first cousin of risk, applies when the future outcomes vary in ways not susceptible to mathematical expression. For our purposes, however, we will view risk colloquially, as the possibility of unfavorable future outcomes.
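Knight's distinction can be made concrete with a small numerical sketch. The figures below are purely hypothetical illustrations, not data from any source: risk is present when outcome probabilities are known, so a variance can be computed; uncertainty is the case where no such probabilities exist.

```python
# Knight's distinction, sketched numerically. "Risk" applies when the
# probabilities of future outcomes are known, so their variance can be
# computed; "uncertainty" applies when they are not. All numbers here
# are hypothetical.

# A risky venture: known payoffs with known probabilities.
payoffs = [100.0, 20.0, -50.0]   # possible dollar outcomes
probs   = [0.5,   0.3,  0.2]     # known probabilities (sum to 1)

mean = sum(p * x for p, x in zip(probs, payoffs))
variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, payoffs))

print(f"expected payoff: {mean:.1f}")              # 46.0
print(f"variance (Knightian risk): {variance:.1f}")  # 3504.0

# Knightian uncertainty is the case where `probs` is simply unknown:
# no variance can be computed, so the venture cannot be priced or
# insured actuarially.
```

The colloquial usage adopted in the text then corresponds to caring only about the left tail of this distribution, the unfavorable outcomes.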

Again, it should be obvious that the prospect of death in twenty-four hours’ time will radically affect your attitude toward risk and benefit. You are out to grab all the gusto you can get in the day you have left. From experience, we realize that the pursuit of pleasure can involve some element of risk. For example, the most hair-raising rollercoaster ride may well provoke the most pleasurable response. But it may also produce nausea, vertigo and unsteadiness. There is even the risk of injury or death if the mechanism malfunctions or you somehow are thrown from the ride.

If you are the kind of person who enjoys rollercoasters, you will be undeterred by their risk in our special case. You are certainly not going to pass up this big thrill for fear of a one-in-a-hundred-million chance of death – you’re going to be dead tomorrow anyway! On the other hand, you might well refuse to ride the coaster with your safety belt unbuckled for the first twenty-three hours of your last day. You don’t want to take foolish risks and waste most of your last day. But you might well reverse that decision during your final hour, especially if you always wondered what it would be like to take that ride unbuckled. You certainly aren’t risking much for that thrill, are you, with only minutes left to live?

Safety

Safety is best understood as reduction in risk or uncertainty. In colloquial terms, it is time and trouble taken to reduce the likelihood of unfavorable outcomes. Put in those terms, the equivocal nature of safety is clear. It demands the sacrifice of time – and time is just what you have so little left of. Why should you take much trouble reducing the likelihood of an unfavorable outcome when you will experience the most unfavorable outcome of all within twenty-four hours? Every second of time you spend on safety reduces the time you could be spending experiencing pleasure; every bit of trouble you take avoiding risk lowers your potential for happiness during the dwindling time you have left.

Now is the time for you to go hang gliding, even launching off a mountain top if the idea takes your fancy. Bungee jumping is another good candidate. In neither case will you spend an hour or two inspecting your equipment for defects or weakness.

Of course, we know that safety is a significant concern for all of us in our daily lives. That is one of the changes introduced by the reversion to reality in our model. Comparing reality to the polar extreme of our reductio ad absurdum outlines the continuum of risk, benefit and safety.

The Reality of Risk, Benefit and Safety

Reality differs from our artificial example in key respects. Although a few of us really do have only twenty-four hours to live, only a tiny fraction of that few know (or suspect) the truth. And of those, virtually none have the freedom and vitality accorded the individual in our example. That clearly affects the central conclusions reached by our model – that the individual would seek out pleasure, eschew work, embrace risk if doing so heightened pleasure significantly and “purchase” little safety at the cost of foregoing pleasure.

We observe, and instinctively realize, that most people must work in order to earn income with which to buy pleasurable consumption goods. They tend to be “risk-averse” within relevant ranges of income and wealth; that is, they will buy a lottery ticket but not play roulette with the rent money. They value safety, but nowhere nearly to the extent implied by the mainstream news media and politicians. In a world of work and production, safety is produced using time and physical resources, which reduces the value of pleasurable goods produced because that time and those resources cannot then be used to produce pleasure. Thus, safety production adds to the money cost and price of consumption goods, which creates a tradeoff between safety and purchasing power. Nobel Laureate George Stigler once colorfully averred that he would rather crash once every 500,000 takeoffs than pay a fortune to fly between major U.S. cities.
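Stigler's quip is an expected-value calculation, and the tradeoff it describes can be sketched in a few lines. All the numbers below are hypothetical (the crash-risk function, the dollar loss, the spending grid are invented for illustration); the point is only that a cost-minimizing traveler buys safety up to the margin where another dollar of surcharge no longer reduces expected loss by a dollar, rather than demanding zero risk at any price.

```python
# A sketch of the safety tradeoff described above, with purely
# hypothetical numbers: safety spending lowers crash risk but raises
# the ticket price, so the traveler minimizes the sum of the two.

VALUE_OF_LOSS = 5_000_000.0   # hypothetical cost of a crash to the traveler

def expected_total_cost(safety_spend: float) -> float:
    """Ticket surcharge plus expected crash loss at this safety level."""
    crash_prob = 1.0 / (1_000 + 10 * safety_spend)  # risk falls with spending
    return safety_spend + crash_prob * VALUE_OF_LOSS

# Scan candidate surcharges and pick the cheapest overall.
candidates = range(0, 2_001)
best = min(candidates, key=expected_total_cost)

print(f"cost-minimizing safety surcharge: ${best}")
print(f"residual crash risk: 1 in {1_000 + 10 * best:,}")
```

Note that the optimum leaves a positive residual risk: driving it to zero would cost more than the remaining risk is worth, which is exactly Stigler's point.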

In other words, the insights gained from our reductio ad absurdum turn out to be surprisingly useful. We merely have to adjust for the length, variability and unpredictability of actual life spans in order to predict the general character of human behavior in the face of risk. And when we apply these adjustments retroactively, we appreciate how badly astray the mainstream historical view of safety has led us.

Rewriting (Pseudo) History

The mainstream view contains at least a grain of truth in its suggestion that the emphasis on safety is a modern development. But the blame attached to profit-hungry capitalists is wrongheaded. This is not because capitalists aren’t profit-hungry; they most certainly are. But the hunger for profits has always been strong even as the production and consumption of safety have varied. Profit-hunger did not suppress safety for centuries, could not prevent the demand for safety from arising and cannot put it back into the bottle now that it has emerged.

The industrial revolution and the rise of free markets created a tremendous increase in human productivity, thereby increasing real incomes throughout the world. The increases were not uniform; certain countries benefitted much more, and faster, than others. The higher incomes increased the demand for safety and for medical research, which in turn led to tremendous gains in life expectancy.

Longer life spans increased the demand for safety even more. This is our reductio ad absurdum played out in reverse. The longer we expect to live, the more future value we are safeguarding by sacrificing present pleasure with our “purchases” of safety. Prior to the 20th century, with life expectancies at birth not much over 50 years even in the developed industrial nations, it didn’t pay to make great sacrifices in current consumption to safeguard the safety of many people whose longevity was limited anyway. But as life expectancy steadily lengthened – particularly for those in the later stages of life – the terms of the tradeoff changed dramatically.
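The longevity effect just described is simple expected-value arithmetic, sketched below with hypothetical numbers (the risk reduction and the dollar value per life-year are invented for illustration): a given risk reduction protects more life-years when expected lives are longer, so the amount worth paying for it rises proportionally.

```python
# The longevity effect in expected-value terms, with hypothetical
# numbers: the benefit of a given risk reduction scales with the
# remaining life-years it protects.

RISK_REDUCTION = 0.001        # annual death risk removed by a precaution
VALUE_PER_LIFE_YEAR = 50_000  # hypothetical dollar value of a life-year

def safety_benefit(remaining_years: float) -> float:
    """Expected life-years saved by the precaution, valued in dollars."""
    return RISK_REDUCTION * remaining_years * VALUE_PER_LIFE_YEAR

print(round(safety_benefit(20)))  # → 1000  (shorter horizon, circa 1900)
print(round(safety_benefit(40)))  # → 2000  (longer modern horizon)
```

Doubling the expected remaining lifespan doubles what the same precaution is worth, which is why lengthening life expectancy shifted the terms of the tradeoff toward safety.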

Risk Compensation

Another factor that greatly affects the balance between risk and safety also emerged in our artificial example. We noted that many of the pleasure-producing human activities carry risk along with their beneficial properties; indeed, the risk itself may even be the source of pleasure. This is true of a wide range of human pursuits, ranging from the rollercoaster ride in our model to auto racing, casino gambling and bungee jumping. Some pastimes such as mountain climbing and hang gliding may produce secondary benefits like physical fitness to supplement their primary purpose of slaking a thirst for risk.

Mainstream society has traditionally viewed risky activities ambivalently. It has tolerated some (mountain-climbing) and frowned on others (gambling, illicit drug-taking) without acknowledging the bedrock similarity common to all. That failure has not only caused much needless death and suffering but has also endangered our freedoms.

Strongly influenced by mid-century muckraker Ralph Nader’s research on the Chevrolet Corvair (later discredited), the U.S. Congress passed legislation beginning in the 1960s requiring American automakers to include safety equipment on all vehicles as standard equipment rather than optional extras. Those safety features included safety belts and, eventually, air bags. Starting in 1975, University of Chicago economist Sam Peltzman published studies of the results of this legislation. His work showed that any lives that might have been saved among occupants of vehicles tended to be offset by lives lost among pedestrians, cyclists and other non-occupants. That was not to deny the existence of a trend toward fewer highway vehicle deaths. Indeed, that trend had been underway well before the safety legislation was passed owing to factors such as improvements in vehicle design, production and maintenance. Sorting out the effects of this trend from those of the legislation required considerable statistical effort, not to say guesswork.

But the existence of a countervailing force was clear. Peltzman suggested that the safety devices made people feel safer, causing them to drive less carefully. Drivers willing to tolerate a certain level of risk compensated for their increased level of personal protection by taking additional driving risks.

Politicians, regulators and do-gooders of all sorts went ballistic when confronted with Peltzman’s conclusions. How dare he suggest that federal-government safety legislation was anything less than a shining example of nobility and good intentions at work? Rather than ponder the implications of his analysis, they hardened their position. Not only did they force businesses to produce safety, they began forcing consumers to consume safety as well. This campaign began with mandatory seat-belt legislation requiring first drivers, then passengers and eventually children to wear seat belts while vehicles were in operation.

Essentially, the implications of the regulatory position were that markets are dysfunctional. In a competitive market, producers not only produce automobiles that provide transportation services, they also provide various complementary features for those autos. One of those features is safety. (In fact, virtually every safety feature was offered by private auto companies before it was required by the government.) Consumers can patronize auto companies and models that provide the most and best safety features, such as seat belts, air bags and anti-lock brakes. Or they can reject safety features by buying autos that lack them. Why would they do that? The obvious reason is that safety features require physical resources and engineering talent to provide, making them costly. Consumers may not wish to pay the cost.

By overriding producer decisions and consumer preferences, regulators in effect assert that markets do not work and government commands should replace the voluntary choices made in the marketplace. One obvious problem with this approach is that it creates momentum in the direction of a centrally planned, totalitarian economy and away from a voluntary, free-market one. But for those who believe that the end justifies the means, the loss of freedom may be justified by the greater safety resulting from the regulatory command-and-control approach.

As time went on, however, it became clear that the regulatory approach was not achieving the results claimed for it. Not only were markets being circumvented, but the regulatory nirvana of a risk-free world was no closer to reality. How could this be? What was going wrong?

As far back as 1908, Britain’s equivalent of the American Automobile Association urged landowners to cut back their hedges to improve visibility for drivers of the newly invented automobile. A retired Army colonel responded to this appeal by noting that this hedge-trimming had caused unintended consequences: his lawn had been filled with dust caused by zooming motorists who exceeded speed limits and skidded into his yard. When detained by police, the offenders maintained that “it was perfectly safe” to drive so fast because visibility was clear for a long distance. So the colonel changed his mind and let his shrubs grow in order to deter the speeders.

Following Sam Peltzman’s lead, researchers in succeeding decades discovered a myriad of analogous phenomena. The proliferation of wilderness- and mountain-rescue teams induced hikers and climbers to take more and bigger risks, thus assuring that deaths and injuries from hiking and climbing would not decline despite the increase in resources devoted to rescue. Parachute manufacturers built superior rip cords, but chutists pulled the rip cord later because they were more confident of the cord’s resilience. The result was stability of death rates for sky divers. Stronger levees did not reduce the incidence of death, injury and damage from floods because people were induced to remain in floodplain areas rather than move out. Indeed, the desirability of these locations meant that more people moved in when they became safer, leading to even more deaths, injuries and damage when a flood did occur. Workers who began wearing back supports still suffered injuries from lifting because the supports encouraged them to lift heavier loads – which overcame the effect of the supports. Research on children who began wearing more protective sports equipment consistently showed that the kids responded by playing more roughly, overriding the benefits of the equipment and continuing the trend toward injuries. Better contraceptives and more effective medical treatments for HIV infection encouraged people to engage in riskier sexual practices, thereby preventing infection rates from declining as much as expected.

The technical term for all these cases is risk compensation. The general public and those with vested interests in government regulation tend to scoff at the concept, but its presence has been confirmed so repeatedly that it is now conventional wisdom. According to the popular purveyor of mainstream science, Smithsonian Magazine, “This counterintuitive idea was introduced in academic circles several years ago and is broadly accepted today…today the issue is not [about] whether it exists but about the degree to which it does.” We see it “in the workplace, on the playing field, at home, in the air” (“Buckle Up Your Seat Belt and Behave,” by William Ecenberger, April 2009).
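The arithmetic of risk compensation can be sketched in a few lines. The figures below are hypothetical, not drawn from Peltzman's studies or any other source: a safety device cuts the harm per accident, but drivers who feel safer take more risks, so the accident rate rises and offsets part (here, most) of the engineering gain.

```python
# A minimal sketch of risk compensation, with hypothetical numbers.
# Total fatalities are the accident rate times the deaths per accident;
# safety technology lowers the second factor, behavior raises the first.

def fatalities(accident_rate: float, deaths_per_accident: float) -> float:
    return accident_rate * deaths_per_accident

# Before the mandate: deadlier crashes, cautious driving.
before = fatalities(accident_rate=100.0, deaths_per_accident=0.020)

# After: belts halve deaths per crash, but perceived safety induces
# drivers to take ~80% more accident risk (the behavioral response).
after = fatalities(accident_rate=180.0, deaths_per_accident=0.010)

print(f"fatalities before mandate: {before:.1f}")  # 2.0
print(f"fatalities after mandate:  {after:.1f}")   # 1.8
```

How large the behavioral offset actually is – partial, complete, or even more than complete – is the empirical question Peltzman's work raised; the sketch only shows why the engineering gain alone overstates the net effect.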

The implications of this research for even so widely venerated a government policy as mandatory seat-belt use are startlingly negative. People inclined to use seat belts are unaffected by the laws, but unwilling wearers who are forced to buckle up are presumably risk-loving types. When their seat belts are firmly in place, they will take more driving risks – after all, they must have had a reason for refusing the belt in the first place and risk-preference is the logical explanation. It follows, then, that they must feel safer when buckled in, which implies that they will try to return to their preferred status of risk tolerance. And studies of seat-belt mandates by economists do tend to show this result.

Risk compensation is so widely accepted among scientists outside of government that a Canadian psychologist has carried it to a logical extreme. Gerald J. S. Wilde propounds the philosophy of risk homeostasis, which posits that human beings automatically adjust their behavior to keep their exposure to risk at a constant level, just as the human body regulates its internal temperature at 98.6 degrees Fahrenheit despite variations in external conditions.
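Wilde's thermostat analogy can be rendered as a simple feedback loop. The parameters below are hypothetical (the target level, the adjustment speed); the sketch only shows the homeostatic mechanism he posits: behavior keeps adjusting until perceived risk returns to the fixed target, so a device that halves risk per unit of behavior ends up roughly doubling the behavior.

```python
# Risk homeostasis as a feedback loop, in the spirit of a thermostat.
# All parameters are hypothetical.

TARGET_RISK = 1.0   # the constant risk level people are posited to tolerate
ADJUST_RATE = 0.5   # how fast behavior responds to the perceived gap

def simulate(safety_factor: float, periods: int = 50) -> float:
    """Return perceived risk after behavior adapts to a safety device.

    `safety_factor` < 1 means technology cuts risk per unit of risky
    behavior; behavior rises each period in proportion to the gap
    between target and perceived risk.
    """
    behavior = 1.0
    for _ in range(periods):
        perceived = behavior * safety_factor
        behavior += ADJUST_RATE * (TARGET_RISK - perceived)
    return behavior * safety_factor

# A device that halves risk per unit of behavior: perceived risk
# returns to the homeostatic target anyway.
print(round(simulate(0.5), 3))  # → 1.0
```

One need not accept the strong homeostasis claim (the text does not) to see from the loop why partial offsets of the kind Peltzman documented are unsurprising.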

The Economic View of Risk

We need not carry belief in adjustment to risk this far in order to recognize the futility of government attempts to fit society into a one-size-fits-all risk-free straitjacket. Not only is it a blatant violation of freedom and free markets, it doesn’t even achieve its intended objectives. It is wrong in theory and wrong in practice.

Risk is not an unambiguous bad thing. It is an unavoidable fact of life toward which different people take widely varying attitudes. For some people, risk is a benefit in and of itself. For practically everybody, risk is a by-product of other beneficial products and activities. Free markets give the most scope for the satisfaction of those different attitudes by allowing the risk-averse to avoid risk and the risk-loving to embrace it – and enabling both groups to do so efficiently via the price system.

Those who claim to see a role for government in allowing the risk-averse to avoid risk are practitioners of what Nobel Laureate Ronald Coase calls “blackboard economics.” This is favored by policymakers standing at a figurative blackboard and divorced from the real-world costs and complications of actually putting their government intervention into operation. In practice, risk and safety policies are delegated to regulators who issue orders and run roughshod over markets. The end result benefits regulators by increasing the size and power of government. The rest of us are stuck with obeying the regulations and picking up the tab.

DRI-265 for week of 2-3-13: Women in Combat: What Are the Issues?

An Access Advertising EconBrief:

Women in Combat: What Are the Issues?

Recently the Pentagon announced the dropping of the other shoe on its policy of women in the military. Women have long (since 1994) been deployed to theaters of combat. Now they will be allowed to serve in combat units.

This has stirred up the predictable hornet’s nest of controversy. Mostly, the battle lines form along the familiar boundary between right and left wing – the left wing hailing the announcement as a long-overdue victory for feminism and the right wing stressing the unsuitability of women for combat roles.

On the face of it, this would seem to be grist for the mill of economics. The logical approach – which is another way of describing the way economists view the world – is apparently to allow people to sort themselves into occupational slots according to their personal preferences and productivities. The price of labor, its wage, serves as the yardstick measuring labor’s value at the margin, enabling businesses to compare it with the monetary value of labor’s technical productivity.

Any woman who can produce more value than she costs is hired – simple as that! And indeed, history tells us that competitive markets are the best known antidote to arbitrary forms of discrimination, whether based on race, gender, age or any other factor extraneous to productivity.

Furthermore, there are reasonable grounds to believe that in a free market for labor, some women could pass the physical tests for qualification as combat soldiers. Does this make the Pentagon’s action a step in the right direction, at the very least?

No. The decision is based solely on political considerations, not economic ones. It will probably work badly and cause death, dissension and abdication in the ranks of the armed forces.

Marginal Productivity Theory and Female Soldiers

A commonly heard rationale in opposition to women in combat is that “men are stronger than women.” This generalization is woefully imprecise and virtually meaningless without further definition. In principle, it might mean that every single man is stronger than every single woman – that no woman is stronger than any man. Of course, we know from personal experience that opponents don’t mean that and that this global statement is not true. In fact, there are some indices of strength by which women tend to be stronger than men – taking “stronger” in its colloquial sense of “stronger on average,” whether the comparison uses the mean value or the median individual.

For military combat, upper-body strength is perhaps the most relevant index. Male upper-body strength is indeed superior on average. But some women have sufficient upper-body strength to meet military-qualification standards. Comparison on other relevant criteria, such as aerobic capacity, produces similar results. We know this even without examining military records, simply by observing world records in athletic events involving upper-body and aerobic performance. Women’s records fall short of men’s records, but rank well above average male performance and implicitly exceed the standards set for combat soldiers. It is therefore possible for women to perform the physical functions demanded by combat.

There was a time when the American woman would have been adjudged too delicate, too sensitive to perform an act as brutal as killing another human being hand-to-hand or even using a weapon. That time is long past. (Indeed, reference to it from personal memory dates the age of the speaker at least to the early baby-boom cohort.) The performance of women in combat in Israel, among other countries, establishes that women can kill. The actions of women in American politics over the last half-century demonstrate the same cold calculation, lack of sensitivity and sheer brutality exhibited by men. Women are just as willing to kill for their beliefs as are men.

Pure economic logic says that optimal selection of men and women for combat duty would require equalization of their marginal productivities. That is, whenever another combat soldier is needed, the highest-productivity applicant is picked (male or female); the long-run tendency is toward a stable equilibrium in which marginal productivities are equalized. Because mean male strength is so much higher, this will result in many male soldiers and few female soldiers.
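The selection rule just described can be sketched with a small simulation. The strength distributions below are hypothetical (same spread, different means, substantial overlap – invented for illustration, not measured data): ranking all applicants by productivity and filling slots from the top, ignoring sex, yields many male and few female selections, exactly as the text predicts.

```python
# A sketch of sex-blind selection by marginal productivity, with
# hypothetical strength distributions. Rank all applicants by measured
# score and fill the slots from the top.
import random

random.seed(42)

# Hypothetical strength scores (arbitrary units): equal spread,
# higher male mean, substantial overlap.
men   = [("M", random.gauss(100, 15)) for _ in range(1_000)]
women = [("F", random.gauss(70, 15)) for _ in range(1_000)]

applicants = sorted(men + women, key=lambda a: a[1], reverse=True)
selected = applicants[:200]   # 200 combat slots to fill

n_women = sum(1 for sex, _ in selected if sex == "F")
print(f"women among the 200 selected: {n_women}")
```

The qualifying women are real but few – the overlap of the distributions, not a quota in either direction, determines the mix.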

So much for pure economics. Up to this point, why has the military chosen to forego the productivity gains that would have accrued from accepting women in combat?

The Rationale For An All-Male Fighting Force

In a pure market setting, the productivity gains from accepting women in combat would be small because only a few women would actually apply, qualify and serve. Some women capable of qualifying would instead prefer to pursue careers in fields such as athletics, which are much more lucrative. And there have always been compelling arguments against trying to realize those small gains.

In a recent Wall Street Journal op-ed, a onetime combat soldier in Iraq spelled out the brutal realities of life as a combat soldier. Some “grunts” who spearheaded the blitz against Baghdad in 2003 spent 48 consecutive hours racing in a column toward the city. Unable to dismount their vehicles, they had to urinate and defecate in place, in full view of and proximity to their comrades. Forcing men and women to endure this together would be to add social strain and humiliation to the already severe strain of combat.

A letter writer to the Journal, also a soldier, pointed out that the inevitable result of coed combat battalions would be pairing off and formation of sexual liaisons. In turn, this would upset the vital cohesion necessary to effective function of the unit by interposing jealousy and envy between squad members. This was not mere speculation on his part, but rather the evidence gathered from coed combat experiments in other countries.

That same kind of evidence argues strongly against the presence of women on the battlefield. The sight of women wounded, threatened with capture and torture, drives male soldiers to commit imprudent acts, thereby jeopardizing the safety and success of their units.

These kinds of disruptions could potentially ruin the effectiveness of a rifle platoon. What’s more, they are only the tip of the iceberg. Admission of women is an open invitation to future allegations of discrimination, sexual harassment and rape. The discrimination can of worms is a wriggling mess of litigation and adverse publicity. The potency of a volunteer force is dependent on successful recruiting, which would be threatened by allegations, scandals and lawsuits. (Indeed, there are already rumblings that thousands of re-enlistments have been jeopardized by the shift in policy.) The risk of such serious losses is not counterbalanced by the small productivity gains accrued by adding women to combat units. That is why the military high command preferred to exclude women entirely from combat roles rather than court potential disaster from the side effects of their presence.

Did this policy “discriminate” against women? Of course. The purpose of creating and maintaining an army is not to give every race, gender, religious affiliation, political party and community organization equal representation among its ranks. The only purpose of an army is to defend the nation as productively as possible. Any combat deployment that achieves that purpose is fair because it delivers on the constitutional guarantee of life, liberty and the opportunity to pursue happiness – for everybody. A job is not and cannot be a property right. And it is consumption that businesses are supposed to provide, not equality of outcomes for people who supply inputs to the businesses. As far as that goes, it would be just as true to say that the policy discriminated against those male soldiers who would have benefitted from close contact with women – just as true and just as irrelevant, for the same reasons.

Women in the Military

Throughout the 20th century, the left wing has distorted the true meaning of concepts like “freedom” and “rights.” The word “freedom” has been used as a euphemism for the concept of power – the power to dictate the terms of trade in what would otherwise be free, voluntary exchanges in free markets. Lack of bargaining power or real income has been wrongly characterized as absence of freedom, calling for government intervention to redress injustice. Inability to work one’s will on others has been misdescribed as an absence of rights, calling for government rules to establish new rights.

Freedom is the absence of coercion, not the ability to impose one’s will on others. A right only exists when its exercise does not reduce someone else’s rights. The issue of women in combat brings these classic fallacies back into action once more.

In the February 6, 2013, issue of Time Magazine, author Darlene Iskra asks rhetorically: “Women In Combat: Is It Really That Big of a Deal?” She poses the question as a false dichotomy between “naysayers” who maintain that “women can’t do combat infantry” and “…dedicated women who only want a chance to serve their country like their male peers” and who believe that “military jobs should be based on performance.” She closes her case with anecdotal histories of a few women who served in the military – as divers, not combat soldiers. In other words, the only issues are biological and political, and the solution is government-imposed equal opportunity.

 

It is true that arguments opposing women in combat are sometimes carelessly put. But every other point made by Ms. Iskra is either dishonest or disingenuous. From the moment the military began admitting women alongside men, its focus began shifting away from maintaining its productivity as a fighting force and toward fulfilling the goals of women as individuals. When women began enlisting, they soon discovered that many of them could not meet the physical standards of performance previously established for the all-male military. When men could not pass the physical tests, they washed out of combat service. But the failure of women produced a different result – a lowering of the standards of acceptance as applied only to women.

This created a climate of cynicism and disillusion, within both the service and the general public. Soldiers realized that the overriding purpose of the military was no longer to defend the nation. Their loyalty was no longer to the consumers of their product, the nation’s civilians. Now some of them were allowed to put their own wants ahead of the defense of the nation. And this attitude potentially put male soldiers’ own lives in jeopardy.

The general public realized that, while all men were created equal, women were created more equal because their wants were given priority over the life, liberty and happiness of civilians. The stage was set for the coup de grace to be administered to the public’s belief in the Rule of Law and equality under the law. It came with the Pentagon’s latest decision.

The dictates of political correctness demand that we rejoice at this great victory for equal rights for women. And most people will doubtless give lip service to that reaction. But deep down, they know that this cannot be the right decision for the nation.


The Purpose of a Fighting Force

Proponents of a government-mandated female presence in combat units claim that it is woman’s right to not merely enlist in the military but fight in combat as well. By phrasing the issue in terms of the rights of the soldier, they are implicitly treating an army as an organization created to further the self-expression of its individual members. This attitude strongly resembles that taken by the left-wing toward business and employment in general; namely, that the purpose of a business is to provide both real income and personal fulfillment for its employees. Any other purposes are secondary to these primary goals.

Economics teaches us otherwise. The purpose of a business – its only purpose – is to produce goods and services for consumers. The fact that the business’s goal may be to maximize the profit it earns for its owners doesn’t alter its purpose. The minute consumers stop wanting what it produces, the business stops – what the owners want no longer matters.

The purpose of the military is to defend the nation. The purpose of combat soldiers is to fulfill their employer’s purpose by fighting the nation’s enemies as productively as possible. For most of the nation’s history, the soldiers of the United States were widely considered inferior to those of other nations. This was true throughout World War II, when German troops were generally viewed as the best, and during the Korean War. It was only when America adopted the all-volunteer armed forces – thereby adopting the principles of the free market in recruiting its labor – that U.S. forces became acknowledged as the world’s finest. This should make it easier to see that the military is serving the nation as a producer serves his customers. Its purpose is not to make its employees (the soldiers) happy, any more than a business’s purpose is to make its employees happy. The military’s consumers are the nation; its purpose is to serve them.

The U.S. Constitution was preceded by the Declaration of Independence, the country’s founding document. In it, Thomas Jefferson proclaims our right to “life, liberty and the pursuit of happiness.” It is in order to protect our right to life that government is granted a monopoly on force and violence. A military combat force exists in order to safeguard our right to life by fighting our enemies.

The left wing is putting its radical agenda ahead of the military’s constitutional duty to defend us. In effect, proponents of government-mandated women in combat are saying, “We are perfectly willing to put our abstract notions of gender equality ahead of the Constitution and the safety of the country. If soldiers have to die, quit the military or suffer anguish because of the presence of women in combat, that is a small price to pay for the satisfaction gained from seeing women serve in combat over the objections of the military and parts of the civilian public.”

What is Behind the Pentagon’s Action?

The left wing’s motives are clear. But why has the Pentagon reversed its previous stance on women in combat?

The military finds itself in a precarious situation. Both Democrats and Republicans are desperately looking for spending to cut. Their gaze has come to rest on the military. Each party has its own reasons for this choice. Democrats look upon the military as ipso facto evil, the only part of government that needs to be downsized. Moreover, women are a gigantic interest group – not that every woman endorses the new policy – and this announcement is a politically easy way to placate them.

Republicans would like to reduce the size of government. They are frantic to cut spending – some spending, any spending. But they have had absolutely no luck cutting wasteful spending. Now they find themselves contemplating the defense budget, like a starving man stranded on a desert island who eventually finds himself surreptitiously measuring the body weight and protein content of the only other person on the island.

The military is in no position to enforce its will on either party. It has caved in to the Democrats because the Democrats are the party in power. The Pentagon is a mammoth bureaucracy held hostage. To a bureaucracy, there is no prospect more terrifying than a budget cut. By changing its policy in acquiescence to the Democrats, it is tacitly begging its captor: “If I let you do this to me, you won’t hurt me, will you?”

Who Speaks for the People?

In everything said so far, both sides of the controversy are behaving according to form. The left wing is ignoring economic logic, the general welfare and the Rule of Law in order to further its aims. The right wing is too confused to formulate a coherent argument, despite the fact that it has had plenty of time to get its intellectual house in order on this issue. Bureaucracies – the federal government in general and the Pentagon in particular – are so far acting exactly as we have come to expect.

And the big loser from this resolution of the longtime debate is the American public, whose military defense will suffer with no counterbalancing gain. Who speaks for them?

A dispassionate appraisal yields a depressing finding: Nobody.

DRI-190 for week of 12-30-12: Stereotypes Overturned: Race, Hollywood and the Jody Call

An Access Advertising EconBrief:

Stereotypes Overturned: Race, Hollywood and the Jody Call

The doctrine often referred to as “political correctness” ostensibly aims to overturn reigning stereotypes governing matters such as race. Yet all too often it results in the substitution of new stereotypes for old. Economics relies on reason and motivation rather than political programming to provide answers to human choices. Nothing could be more subversive of stereotypes than that.

What follows is a tale of Hollywood, race and the American military. At the time, each of these elements was viewed through a stylized, stereotypical lens – as they still are to some extent. But in no case did this tale unfold according to type. The reasons for that were economic.

The Movie Battleground (1949)

In 1949, Metro Goldwyn Mayer produced one of the year’s biggest box-office hits, Battleground. It told the story of World War II’s Battle of the Bulge as seen through the eyes of a single rifle squad in the 101st Airborne Division of the U.S. Army. In late 1944, Germany teetered on the edge of defeat. Her supreme commanders conceived the idea of a desperate mid-winter offensive to grab the initiative and rock the Allies back on their heels. The key geographic objective was the town of Bastogne, Belgium, located at the confluence of seven major roads serving the Ardennes region and Antwerp harbor. Germany launched an attack that drove such a conspicuous salient into the Allied line that the engagement acquired the title of the “Battle of the Bulge.”

The Screaming Eagles of the 101st Airborne were the chief defenders of Bastogne. This put them somewhat out of their element, since their normal role was that of attack paratroopers. Despite this, they put up an unforgettable fight even though outnumbered ten to one by the German advance. The film’s scriptwriter and associate producer, Robert Pirosh, was among those serving with the 101st and trapped at Bastogne.

Battleground accurately recounted the Battle of the Bulge, including an enlisted man’s view of the legendary German surrender demand and U.S. General McAuliffe’s immortal response: “Nuts.” But the key to the film’s huge box-office success – it was the second-leading film of the year in ticket receipts – was its continual focus on the battle as experienced by the combat soldier.

The men display the range of normal human emotions, heightened and intensified out of proportion by the context. Courage and fear struggle for supremacy. Boredom and the Germans vie for the role of chief nemesis. The film’s director, William Wellman, had flown in the Lafayette Escadrille in World War I and was one of Hollywood’s leading directors of war films, including the first film to win a Best Picture Oscar, Wings.

Some of MGM’s leading players headed up the cast, including Van Johnson, George Murphy, John Hodiak, and Ricardo Montalban. The film was nominated for six Academy Awards and won two, for Pirosh’s story and screenplay and Paul Vogel’s stark black-and-white cinematography. In his motion-picture debut, James Whitmore was nominated for Best Supporting Actor and won a Golden Globe Award as the tobacco-chewing sergeant, Kinnie.

Whitmore provides the dramatic highlight of the film. Starving and perilously low on ammunition, the men of the 101st grimly hold out. They are waiting for relief forces led by General George Patton. Overwhelming U.S. air superiority over the Germans is of no use because fog and overcast have Bastogne completely socked in, grounding U.S. planes. Whitmore’s squad is cut off, surrounded and nearly out of bullets. Advised by Whitmore to save their remaining ammo for the impending German assault, the men silently fix bayonets to their rifles and await their death. Hobbling back to his foxhole on frozen feet, Whitmore notices something odd that stops him in his tracks. Momentarily puzzled, he soon realizes what stopped him. He has seen his shadow. The sun has broken through the clouds – and right behind it come American planes to blast the attacking German troops and drop supplies to the 101st. The shadow of doom has been lifted from “the battered bastards of Bastogne.”

1949 audiences were captivated by two scenes that bookended Battleground. After the opening credits and scene-setting explanation, soldiers are seen performing close-order drill led by Whitmore. These men were not actors or extras but were actual members of the 101st Airborne. They executed Whitmore’s drill commands with precise skill and timing while vocalizing a cadence count in tandem with Whitmore. This count would eventually attain worldwide fame and universal acceptance throughout the U.S. military. It began:

You had a good home but you left

You’re right!

You had a good home but you left

You’re right!

Jody was there when you left

You’re right!

Your baby was there when you left

You’re right!

Sound Off – 1,2

Sound Off – 3,4

Cadence Count – 1,2,3,4

1,2 – 3-4!

At the end of the movie, surviving members of Whitmore’s squad lie exhausted beside a roadway. Upon being officially relieved and ordered to withdraw, they struggle to their feet and head toward the rear, looking as worn out and numb as they feel. They meet the relief column marching towards them, heading to the front. Not wishing for the men to seem demoralized and defeated, Van Johnson suggests that Whitmore invoke the cadence count to bring them to life. As the movie ends, the squad marches smartly off while adding two more verses to the cadence count, supported by the movie’s music score:

Your baby was lonely as lonely could be

Until he provided company

Ain’t it great to have a pal

who works so hard to keep up morale?

Sound Off – 1,2

Sound Off – 3,4

Cadence Count – 1,2,3,4

1,2 – 3-4!

You ain’t got nothing to worry about

He’ll keep her happy ’till I get out

And I won’t get out ’till the end of the war

In Nineteen Hundred and Seventy-four

Sound Off – 1,2

Sound Off – 3,4

Cadence Count – 1,2,3,4

1,2 – 3-4!

The story of this cadence count, its inclusion in Battleground, its rise to fame and the fate of its inventor and his mentor are the story-within-the-story of the movie Battleground. This inside story speaks to the power of economics to overturn stereotypes.

The Duckworth Chant

In early 1944, a black Army private named Willie Lee Duckworth, Sr., was returning to Fort Slocum, NY, from a long, tiresome training hike with his company. To pick up the spirits of his comrades and improve their coordination, he improvised a rhythmic chant. According to Michael and Elizabeth Cavanaugh in their blog, “The Duckworth Chant, Sound Off and the Jody Call,” this was the birth of what later came to be called the Jody (or Jodie) Call.

Duckworth’s commanding officer learned of the popularity of Duckworth’s chant. He encouraged Duckworth to compose additional verses for training purposes. Soldiers vocalized the words of the chant along with training commands as a means of learning and coordinating close-order drill. Duckworth’s duties exceeded those of composer – he also taught the chant to white troops at Fort Slocum. It does not seem overly imaginative to envision episodes like this as forerunners to the growth of rap music, although it would be just as logical to attribute both phenomena to a different common ancestor.

Who is Jody (or Jodie)? The likely derivation is from a character in black folklore, Joe de Grinder, whose name would have been shortened first to Jody Grinder, then simply to Jody. The word “grind” has a sexual connotation, and Jody’s role in the cadence count has indeed been to symbolize the proverbial man back home and out of uniform, who threatens to take the soldier’s place with his wife or girlfriend.

Already our story has turned certain deeply ingrained racial stereotypes upside down. In 1944, America was a segregated nation, not just in the South but North, East and West as well. This was also true of our armed forces. Conventional thinking (as distinct from conventional wisdom) holds that a black Army private had no power to influence his fate and was little more than a pawn under the thumb of larger forces.

Yet against all seeming odds and expectations, a black draftee from the Georgia countryside spontaneously introduced his own refinement into military procedure – and that refinement was not only accepted but wholeheartedly embraced. The black private was even employed to train white troops – at a point when racial segregation was the status quo.

Pvt. Duckworth’s CO was not just any commanding officer. He was Col. Bernard Lentz, the senior colonel in the U.S. Army at that time. Col. Lentz was a veteran of World War I, when he had developed the Cadence System of Teaching Close-Order Drill – his own personal system of drill instruction using student vocalization of drill commands. When Lentz heard of Duckworth’s chant, he immediately recognized its close kinship with his own methods and incorporated it into Fort Slocum’s routine.

The public-choice school of economics believes that government bureaucrats do not serve the “public interest.” Partly, this is because there is no unambiguous notion of the public interest for them to follow. Consequently, bureaucrats can scarcely resist pursuing their own ends, since it is easy to fill the objective-function vacuum with their own personal agenda. This is a case in which the public interest was served by a bureaucrat pursuing his own interests.

Col. Lentz had a psychological property interest in the training system that he personally developed. He had a vocational property interest in that system since its success would advance his military career. And in this case, there seems to be little doubt that the Duckworth Chant improved the productivity of troop training. Its use spread quickly throughout the army. According to the Cavanaughs, it was being used in the European Theater of Operations (ETO) by V-E Day. Eventually, Duckworth’s name recognition faded, to be replaced by that of his chant’s eponymous character, Jody. But the Jody Call itself remains to this day as a universally recognized part of the military experience.

Thus, the stereotypes of racial segregation and bureaucratic inertia were overcome by the economic logic of property rights. And the morale of American troops has benefitted ever since.

Hollywood as User and Abuser – Another Myth Exploded

The name of Pvt. Willie Lee Duckworth, Sr. does not exit the pages of history with the military’s adoption of his chant as a cadence count. Far from it. To paraphrase the late Paul Harvey, we have yet to hear the best of the rest of the story.

As noted above, the Duckworth chant spread to the ETO by early 1945. It was probably there that screenwriter Robert Pirosh encountered it and germinated the idea of planting it in his retelling of the Battle of the Bulge. When Battleground went into production, MGM representative Lily Hyland wrote to Col. Lentz asking if the cadence count was copyrighted and requesting permission to use it in the film.

Col. Lentz replied, truthfully, that the cadence count was not under copyright. But he sincerely requested compensation for Pvt. Duckworth and for a half-dozen soldiers who were most responsible for conducting training exercises at Fort Slocum. The colonel suggested monetary compensation for Duckworth and free passes to the movie for the other six. MGM came through with the passes and sent Pvt. Duckworth a check for $200.

As the Cavanaughs point out, $200 sounds like a token payment today. But in 1949, $200 was approximately the monthly salary of a master sergeant in the Army, so it was hardly trivial compensation. This is still another stereotype shot to pieces.

Hollywood has long been famed in song and story – and in its own movies – as a user and abuser of talent. In this case, the casual expectation would have been that a lowly black soldier with no copyright on a rhyming chant he had first made up on the spur of the moment, with no commercial intent or potential, could expect to be stiffed by the most powerful movie studio on earth. If nothing else, we would have expected that Duckworth’s employer, the Army, would have asserted a proprietary claim for any monies due for the use of the chant.

That didn’t happen because the economic interests of the respective parties favored compensating Duckworth rather than stiffing him. Col. Lentz wanted the Army represented in the best possible light in the film, but he particularly wanted the cadence count shown to best advantage. If Pvt. Duckworth came forward with a public claim against the film, that would hurt his psychological and vocational property interests. The last thing MGM wanted was a lawsuit by a soldier whose claim would inevitably resonate with the public, making him seem to be an exploited underdog and the studio look like a bunch of chiseling cheapskates – particularly when they could avoid it with a payment of significant size to him but infinitesimal as a fraction of a million-dollar movie budget.

A Hollywood Ending – Living Happily Ever After

We have still not reached the fadeout in our story of Col. Lentz and Pvt. Duckworth. Carefully observing the runaway success of Battleground, Col. Lentz engaged the firm of Shapiro, Bernstein & Co. to copyright an extended version of the Duckworth chant in 1950 under the title of “Sound Off.” Both he and Willie Lee Duckworth, Sr. were listed as copyright holders. In 1951, Vaughn Monroe recorded the first of many commercial versions. In 1952, a film titled Sound Off was released. All these commercial exploitations of “Sound Off” resulted in payments to the two men.

How much money did Pvt. Duckworth receive as compensation for the rights to his chant, you may ask? By 1952, Duckworth was apparently receiving about $1,800 per month. In current dollars, that would amount to an income well in excess of $100,000 per year. Of course, like most popular creations, the popularity of “Sound Off” rose, peaked and then fell off to a whisper. But the money was enough to enable Duckworth to buy a truck and his own small pulpwood business. That business supported him, his wife and their six children. It is fair to say that the benefits of Duckworth’s work continued for the rest of his life, which ended in 2004.

If you are still dubious about the value of what MGM gave Duckworth, consider this. The showcase MGM provided for Duckworth’s chant amounted to advertising worth many thousands of dollars. Without it, the subsequent success of “Sound Off” would have been highly problematic, to put it mildly. It seems unlikely that Col. Lentz would have been inspired to copyright the cadence count, and any benefits received by the two would have been minuscule in comparison.

The traditional Hollywood movie ending is a fadeout following a successful resolution of the conflict between protagonist and antagonist, after which each viewer inserts an individual conception of perpetual bliss as the afterlife of the main characters. In reality, as Ernest Hemingway reminds us, all true stories end in death. But Willie Lee Duckworth, Sr.’s story surely qualifies as a reasonable facsimile of “happily ever after.”

This story is not the anomaly it might seem. Although Hollywood itself was not a powerful engine of black economic progress until much later, free markets were the engine that pulled the train to a better life for 20th century black Americans. Research by economists like Thomas Sowell has established that black economic progress long preceded black political progress in the courts (through Brown vs. Topeka Board of Education) and the U.S. Congress (through legislation like the Civil Rights Act of 1964).

The Movie that Toppled a Mogul

There were larger economic implications of Battleground. These gave the film the sobriquet of “the movie that toppled a mogul.” As Chief Operating Officer of MGM, Louis B. Mayer had long been the highest-paid salaried employee in the U.S. The size of MGM’s payroll made it the largest contributor on the tax rolls of Southern California. Legend had endowed Mayer with the power to bribe police and influence politicians. Seemingly, this should have secured his job tenure completely.

Battleground was a project developed by writer and executive Dore Schary while he worked at rival studio RKO. Schary was unable to get the movie produced at RKO because his bosses there believed the public’s appetite for war movies had been surfeited by the wave of propaganda-oriented pictures released during the war. When Schary defected to MGM, he brought the project with him and worked ceaselessly to get it made.

Mayer initially opposed Battleground for the same reasons as most of his colleagues in the industry. He called it “Schary’s Folly.” Yet the movie was made over his objections. And when it became a blockbuster hit, the fallout caused Mayer to be removed as head of the studio that bore his name. To add insult to this grievous injury, Schary replaced Mayer as COO.

For roughly two decades, economists had supported the hypothesis of Adolf Berle and Gardiner Means that American corporations suffered from a separation of ownership and control. Ostensibly, corporate executives were not controlled by boards of directors who safeguarded the interests of shareholders. Instead, the executives colluded with boards to serve their joint interests. If ever there was an industry to test this hypothesis, it was the motion-picture business, dominated by a tightly knit group of large studios run by strong-willed moguls. MGM and Louis B. Mayer were the locus classicus of this arrangement.

Yet the production, success and epilogue of Battleground made it abundantly clear that it was MGM board chairman Nicholas Schenck, not Mayer, who was calling the shots. And Schenck had his eye fixed on the bottom line. Appearances to the contrary notwithstanding, Louis B. Mayer was not the King of Hollywood after all. Market logic, not market failure, reigned. Economics, not power relationships, ruled.

Thanks to Battleground, stereotypes were dropping like soldiers of the 47th Panzer Corps on the arrival of Patton’s Third Army in Bastogne.

No Happy Ending for Hollywood

Battleground came at the apex of American movies. Average weekly cinema attendance exceeded the population of the nation. The studio system was a smoothly functioning, vertically integrated machine for firing the popular imagination. It employed master craftsmen at every stage of the process, from script to screen.

Although it would have seemed incredible at the time, we know now that it was all downhill from that point. Two antitrust decisions in the late 1940s put an end to the Hollywood studio system. One particular abomination forbade studios from owning chains of movie theaters; another ended up transferring creative control of movies away from the studios.

The resulting deterioration of motion pictures took place in slow motion because the demand for movies was still strong and the studio system left us with a long-lived supply of people who still preserved the standards of yore. But the vertically integrated studio system has been gone for over half a century. Today, Hollywood is a pale shadow of its former self. Most movies released by major studios do not cover their costs through ticket sales. Studio profits result from sales of ancillary merchandise and rights. Theater profits are generated via concession sales. Motion-picture production is geared toward those realities and targeted predominantly toward the very young. Subsidies by local, state and national governments are propping up the industry throughout the world. And those subsidies must disappear sooner or later – probably sooner.

This has proved to be the ultimate vindication of our thesis that economics, not stereotypical power relationships, governed the movie business in Hollywood’s Golden Age. Free markets put consumers and shareholders in the driver’s seat. The result created the unique American art form of the 20th century. We still enjoy its fruits today on cable TV, VHS, DVD and the Internet. Misguided government attempts to regulate the movie business ended up killing the golden goose or, more precisely, reducing it to an enfeebled endangered species.

DRI-179 for week of 12-23-12: Shoot the Shooter

An Access Advertising EconBrief:

Shoot the Shooter

By this time, few if any Americans can be unaware of the slaughter of 20 elementary schoolchildren and 6 teachers and administrators at the Sandy Hook elementary school in Newtown, CT, on Dec. 14. The perpetrator, 20-year-old Adam Lanza, used a semi-automatic rifle belonging to his mother, whom he killed first of all. The shootings have maintained a stranglehold on the attention of the news media since they occurred, fending off even the “fiscal cliff” for primacy.

The news media, mainstream politicians and the left wing reacted to this horrific act with utter predictability. They all blamed the physical instrument used to commit the crime – a gun – for the purposive acts of the perpetrator. Calls went out for heightened gun control. The word “heightened” is apropos because guns are already the most heavily regulated consumer purchase in America.

The apogee of this predictable reaction was reached with a call by President Barack Obama for legislation to be recommended by a committee headed by Vice-President Joe Biden. The legislation would purportedly be directed at “gun violence,” but this is widely understood as a euphemism for gun control; e.g., further restrictions on the possession, purchase and use of guns.

This is the latest in a string of mass shootings, each of which has received lavish publicity, triggering (no pun intended) similar calls for regulatory screw-tightening. There is a rapidly forming consensus that “this time is different.” The reasons for the difference vary from cumulative disgust (“Enough is enough,” proclaimed President Obama in heralding the formation of his commission) to the ostensible escalation of horror resulting from the murder of children.

This space thoroughly analyzed the last mass shooting (in an Aurora, CO cinema premiering the latest installment in the Batman franchise) and provided the logical response. Not surprisingly, that response was thoroughly ignored – although the evidence has continued to mount in its favor. Now, with the Second Amendment rights of Americans and their very safety at risk as never before, those arguments are well worth rehearsing.

The Problems

There are three problems associated with mass shootings of the Newtown type. Listed in descending order of importance, they are:

The problem of dealing with the shooter. The overarching problem is the fact that a group of people is faced by an armed man intent on killing as many of them as possible – or at least killing until his need or desire to kill has been satiated. The immediate imperative is an emergency of the highest order: to stop the killing as quickly and completely as possible.

The problem of deterring further shootings. Once the killing has been stopped, the highest remaining need in the hierarchy of urgency can be addressed. That is the need to deter further shootings of this type. In criminal justice generally, deterrence is accomplished by apprehension and punishment. Mass shootings present a unique and anomalous case. Apprehension is not a problem because the shooter continues to shoot until interrupted by the arrival of the police and then either commits suicide or (rarely) surrenders. Punishment does not deter because the shooter is obviously fully prepared to die at the scene or, failing that, following conviction. The shooter is someone for whom life holds no further attraction and meaning is reduced to taking random vengeance for the perceived slights he has suffered. Thus the problem of deterrence appears in a peculiar and unique guise.

The problem of uncovering the “root cause” of the shootings; e.g., of discovering the precise motive that constitutes the perception of injury and source of homicidal rage. The ostensible presumption is that this discovery will unlock the door to deterring further shootings.

Mainstream media attention has focused on these problems in inverse order of their actual importance. From the first media reports – long before any of the details of the crime were accurately relayed – the obsessive focus has puzzled over the shooter’s motive. Of course, motive plays a key role in a typical murder investigation, but that is because the murderer’s identity is usually unknown or in dispute. Motive, means and opportunity form the triad of elements necessary to secure a criminal conviction under those circumstances.

That is all too obviously not true here. The shooter is known. Even in the unlikely event of a trial, even given the proverbial difficulty of actually proving simple guilt in a capital case, the issue of motive is surely peripheral to guilt or innocence because the physical circumstances are utterly damning.

If we don’t need to know the shooter’s motive to convict him, why does motive matter? The vague presumption is that if we only knew what makes people do these things, we could prevent them – somehow, some way. That explains the repeated references to “mental illness” as a common denominator among shooters and the blaming of the de-institutionalization policies adopted in the 1970s for allowing time-bomb killers to roam the streets.

“Mental Illness” as Scapegoat

Unfortunately, the mental illness paradigm is doubly disappointing as an answer to the problem of mass shootings. It can neither satisfactorily explain their incidence nor offer the key to deterrence. The term “mental illness” is a throwback to the days of Freudian psychology, before neuroscience came along. The days of belief in “diseases” of the unconscious mind, analogous to diseases of the body but treatable via psychotherapy rather than medicine, are blessedly behind us. What we once called mental illness has gradually revealed itself largely as aberrant brain chemistry, treatable with drugs. Psychiatrists have traded in their couches for a pharmacopeia. Cultural lag has restrained public recognition of the fact that “mental illness” is an obsolete term.

Despite the claims of institutionalization proponents like Dr. E. Fuller Torrey, however, we cannot confidently sort out potentially violent sufferers of (say) bi-polar disorder, let alone distinguish dangerous psychotics from harmless ones. The traditional legal definition of insanity has long been the inability to distinguish right from wrong, but there is little or no reason to believe that today’s mass shooters are insane in this sense, although they may well be mentally ill in the physical sense.

Institutionalization of the mentally ill fell from favor for the very good reason that the practice was routinely and grossly abused. The protections against seizure and detention that all of us take for granted were suspended on supposed medical grounds that we now know to have been all too often spurious. State mental institutions were not always the hellholes depicted in the 1948 movie The Snake Pit, but the shoe fit well enough to touch off a nationwide furor and set events in motion that culminated in the 1970s.

Now that the pendulum of political theater has swung back to focus on mass shootings, the political establishment has whistled up a dragnet for scapegoats and the mentally ill are easy pickings. How many votes do they command, after all? It is much more politically correct to come out as homosexual than as mentally ill. While it may be easy to pretend to solve the problem of mass shootings by stigmatizing a vague class of people that are hard to identify, actually getting results that way is a different story.

The attempt to use mental illness as a scapegoat for mass shootings is really a variant of the old left-wing “root cause” approach to criminology. For decades, garden-variety criminality was excused as the product of sociological deprivation. The only way to fight crime, the left insisted, was to abolish poverty by fighting a “war on poverty.” That war was lost long ago when we discovered that fighting it benefitted the fighters more than the poor and that poverty was a relative, not an absolute, phenomenon. Ironically, the only viable “root-cause” solution is one we refuse to adopt; namely, drug legalization.

The Real Solution

As originally noted in our first discussion of this problem, the most urgent item of business is to neutralize the shooter. The following thought experiment is instructive: Assume that an experienced policeman happens to be on the scene of a mass shooting. What would he do when the shooter produced one or more weapons and opened fire? The answer is blindingly obvious.

He would draw his weapon – policemen are required to carry one even when off duty – and shoot the shooter. There is only one way to handle an armed perpetrator bent on immediate and indiscriminate homicide – by shooting him. The policeman would not try to negotiate with the shooter. He would not call for backup, call for a SWAT team or call for Philip Morris. And his shots would have only one objective: to kill the shooter. A wounded armed opponent can still kill you and other people in the vicinity.

The crystal clarity of this insight contrasts jarringly with the public refusal of most people – particularly politicians and journalists – to face it. When Wayne LaPierre, executive vice-president of the National Rifle Association, declared that “the only thing that can stop a bad guy with a gun is a good guy with a gun,” his call to station a policeman in schools was met with derision. A typical reaction from academia was that return fire from police would increase risk by increasing the number and sources of fired bullets that might injure students.

That a response so staggeringly inept could originate with an educator – ostensibly a font of wisdom and reasoned thought – speaks volumes about the degradation of education in general and current public discourse in particular. Failure to shoot the shooter will (as it has in every case to date) allow him to kill his fill of innocent citizens until the police arrive. Return fire, even if ineffectual, will draw the shooter’s attention and shots toward the retaliator and away from the audience, allowing the unarmed to escape.

Another inane argument advanced against retaliating fire is that mass shooters now often sport so-called bullet-proof vests. This is not only true but also quite significant, since it shows that shooters are not too deracinated to carefully plot their crime and anticipate opposition. But the use of (say) a Kevlar vest is no reason not to shoot the shooter. First and foremost, a vest does not protect the shooter’s vulnerable head and neck. Equally telling, a vest-wearing shooter does not continue his work unperturbed like Superman while bullets bounce off his vest harmlessly. A bullet-proof vest is designed to prevent a mortal wound, not to completely overcome all effects of a fired bullet. The impact of a slug from a large-caliber handgun will probably knock down and badly bruise a vest-wearing human target. At the very least, it will allow an audience time to escape and a retaliator time and opportunity to finish him off. (Vest-wearing police normally conduct firefights in pairs or teams and rely on their colleagues for protection when struck.)

The Anti-gun Movement: Cynicism and Hysteria

The foregoing arguments are a sample of how the left wing wages its current fight to control guns. (The word “debate” does not apply to these exchanges since the left wing proffers neither logic nor empirical evidence and makes its points by shouting down the opposition.) The left runs the gamut of emotional reaction from cynicism to hysteria.

President Obama’s reaction to the shooting was political cynicism in its purest (or impurest) form. “Enough is enough,” he intoned solemnly. The nation could no longer afford to indulge the freedoms traditionally accorded gun owners. But enough only became enough after the President’s reelection, not after the previous mass shooting in the Aurora, CO movie theater in July, 2012. Had some sort of cumulative numerical threshold for mass murder been surpassed?

No, the hurdle presented by the President’s reelection had been surpassed; that was the difference in the two situations. Now the President could apply his trusty rule-of-thumb: Never let a crisis go to waste. The President’s black constituency is a dedicated group of gun-bearers. Prior to reelection he could hardly have risked incurring their wrath by threatening their rights and property. Now, with 94% of their votes safely recorded and his tenure secured, he can go back to ignoring their welfare in favor of the hard-left agenda of gun proscription and confiscation.

At the other emotional pole is the hysterical fringe. Their poster boy is British-born Piers Morgan, host of CNN’s Piers Morgan Tonight. His notion of hospitality to guest Larry Pratt, longtime Second Amendment defender and gun educator, was to hurl imprecations at him. Morgan called Pratt an “idiot,” “dangerous” and “an unbelievably stupid man” – all within the space of less than a minute. Later, Morgan asked rhetorically “how many more kids have to die before” more restrictive gun laws are passed.

The reaction to Morgan’s tantrum is instructive. To date, over 70,000 signatories have urged his deportation (!) in an online petition posted to a White House website. The episode is a classic illustration of what F.A. Hayek called absolute or unlimited democracy at work. Opposing sides expend vast quantities of resources to gain political power which, when attained, they then use to deprive the other side of its rights. The left tries to deprive the right of the right to self-defense; the right tries to deprive the left of freedom of movement.

Readers of the world-famous British weekly The Economist know how Morgan came by his arrogant tunnel vision. The magazine noted that mass shootings in Great Britain and Tasmania in 1996 led directly to a ban of most private handgun ownership in Great Britain and a ban on most semi-automatic weapons in Australia. “If similar laws had been in effect in Sandy Hook,” the magazine piously declared, “some of those lost might have survived.” In fact, England’s gun ban was followed by an epidemic of gun-related violence. Handgun crime doubled and English police began carrying guns for the first time. In Australia, assaults – particularly sexual assaults – went up dramatically following the bans, while homicides continued a modest decline that started prior to the ban.

A once-great magazine has sunk to unimagined depths of demagoguery and incompetence. Bad enough to have refused to face the truth of a single historical example, but The Economist has turned its eyes away from 25 years of pathbreaking social and economic research spanning the globe.

Guns are the Answer, not the Problem

The left-wing movement for gun control was sparked by the political assassinations of the 1960s and turbo-charged by the attempted assassination of President Reagan and his press secretary in 1981. Serious research into the incidence of gun ownership and violence followed later in that decade. Gary Kleck, a liberal academic at Florida State University, began with the general expectation of documenting the case for gun control. To his great surprise, he found that cases of gun use for self-defense and protection vastly outnumbered cases of criminal use – by a factor of six in 1993, according to his estimates based on a survey of 5,000 households. Economist John Lott did extensive research on the extension of concealed-carry rights, finding that rates of violent crime in general and murder in particular declined when and where those rights were granted. David Kopel was another researcher whose work in this field has been widely noted and cited. The field of research eventually broadened to include worldwide study of violence and mass killings. The latter are not, as often claimed, unique to the United States. They are a trans-national and cross-cultural global phenomenon, perpetrated with and without guns.

As one would expect, critics (i.e., the left wing) did everything but dismember these men in order to discredit them. But those efforts failed, because all Kleck, Lott, Kopel, et al were doing was empirically bolstering a case that was already logically airtight. Even if recorded instances of handgun defensive use were actually outnumbered by numbers of crimes committed using handguns, this doesn’t even start to make a case for gun control, let alone a gun ban. We can never record all the cases in which citizens interrupt a crime in progress by brandishing a handgun. We can never even begin to imagine all the times in which criminals are deterred from crime by the knowledge or the suspicion that the potential victim is armed. It is no accident that mass shootings occur in so-called “gun-free” settings, where guns are available only to criminals, not law-abiding citizens in need of defense.

Gun control and gun bans do virtually no good at all, only bad. They do nothing to prevent mass shootings or, indeed, crime of any kind. Criminals do not obey laws – including gun laws. Ordinary criminals prefer to work with guns whose identifying marks have been erased; these are available in the black market. Black markets in beverage alcohol and recreational drugs developed quickly and massively in response to the combination of widespread demand and official proscription. Minutes after restrictive gun laws or gun bans were officially put on the books, black markets in guns would spring up.

It is both ironic and fitting that the left-wing solution is especially inappropriate in the case of mass shootings. Adam Lanza obtained his weapons illegally. Like other mass shooters, he had access to wealth that he could and would have used to acquire guns in the black market had they been illegal. Mass shooters are the last people in the world to be deterred by the high price and inconvenience of black-market transactions; after all, they are preparing to leave this world. They face only one possible deterrent – the possibility that they cannot execute their plan to kill large numbers of people. The only roadblock to that plan is the presence on site of somebody with a gun to shoot them.

Economists use two Latin phrases that explain the fallacy under which gun controllers operate. Gun bans implicitly assume a condition of ceteris paribus (“all other things the same or unchanged”); the left believes that they can ban guns without causing huge behavioral responses by the public. But economic reality follows the principle of mutatis mutandis (“let those things change that will change”); behavioral changes will accompany severe gun restrictions. Those changes will create black markets that will neutralize the effects of the gun restrictions and wreak havoc on our lives. Criminals will have guns but law-abiding citizens will not have them for self-defense. So, law-abiding citizens will have to become criminals in order to protect themselves.

It would be bad enough if gun control and gun bans were only ineffectual, if the left wing were guilty only of good intentions gone wrong. But the truth is much worse. It indicts the left of exactly the crime of which they accuse gun owners and the NRA – indifference to the fate of innocent children and adults. Guns themselves are the solution – the only solution – to the immediate problem posed by gun-related violence. The police recognize that; in response to the increased firepower utilized by drug cartels, the police have become virtually paramilitary in size, scope and technique.

Police in the Schools?

The proposal put forward by Wayne LaPierre of the NRA is a perfect reflection of the zeitgeist. In these times, the only politically viable way to oppose a big-government power grab is to respond with a Newtonian equal-and-opposite reaction – your own big-government counter-proposal. That is what the NRA has done. Presumably they did it for political reasons, because they believe that putting somebody in authority behind the gun will somehow soften or sanctify a reaction that would otherwise be objectionable. Predictably, this did not work. The left wing reacted just as emotionally as if the NRA had proposed installing a Tea-Party-certified marksman in each school. The same left-wing media figures who recoil in horror from armed police in public schools send their own children to private schools like Sidwell Friends, which employ armed guards.

Now the right wing is stuck with its own big-government proposal, made in the heat of panic. The vague notion that each policeman is somehow well-versed in the care and handling of firearms is periodically dispelled when a gaggle of policemen take a dozen shots to dispatch a “dangerous” neighborhood pit bull or expend fifty rounds or so inside a bar or into the body of an unarmed suspect. These days, the real experts on guns are detailed to SWAT, where they are much too valuable on drug patrol to be wasted as public-school monitors.

The likely government alternative to the police would be the HSA, another unlikely source of genuine protection. Retired military veterans are the only source of actual expertise in weapons and combat who might be available for this duty. As one might expect, the best way to handle the problem of mass shootings in schools is to stop the government from getting involved.

But stopping the government from getting involved in something – anything – has now become just about the most difficult thing in the world to do.

DRI-280 for week of 11-11-12: Restaurant-Dish Takeaway and Comparative Economic Systems


An Access Advertising EconBrief:

 Restaurant-Dish Takeaway and Comparative Economic Systems

You are eating dinner in a casual restaurant with a spouse. No sooner does the last forkful of food ascend toward your mouth than your waiter whisks away the plate. His request for permission – “Done with that?” – is purely a formality since the plate is gone before you can object.

You have observed a tendency in recent years for restaurant servers to remove dishes with increasing alacrity. You remark this to your dinner companion who, unlike you, is a non-economist. Her all-purpose explanation of human behavior is binary: Is the object of study a nice guy or not? Nice guys remove dishes quickly so diners have more elbow room to relax.

You are an economist. You believe people act purposefully to achieve their ends. Moreover, you are thoroughly acquainted with tradeoffs. You have often had waiters take your plate before you were through with it. Some people bristle when they perceive others constantly hovering over them. There are even those – not you, of course, but boors and gluttons – who eat the food of others after finishing their own. One of these types might just react by snatching back his plate and declaring, a la John Paul Jones, “I have not yet begun to eat!”

The “nice-guy” explanation won’t suffice, since the quick-takeaway approach will suit many people well but others poorly. Restaurants that follow a consistent policy of quick takeaway risk offending some customers. Offending customers is not something restaurants do lightly. In order to make this risk worthwhile, there should be some strong motivation in the form of a compensating prospect of gain. What might that be?

One way to define economists is to say that they are the kind of people who ask themselves questions like this. And the mark of a good economist is that he can supply not only answers but also further implications and ramifications for social life and government policy.

The Economics of Restaurant Service

Americans have eaten in restaurants since before America became the United States. While the basic concepts underlying the restaurant sector have remained intact, structural changes have remade the industry in recent decades. The most important contributor has been the institution of franchising.

Fast-service franchising began in the 1920s with A&W root-beer stands and Howard Johnson motel-restaurants. Baskin-Robbins, Dairy Queen and Tastee-Freez hopped on the bandwagon in the 1930s and 40s. McDonald’s made franchising big business in the 1950s, and Subway followed in the 1960s. The decade of the 1960s saw restaurant franchises zoom to over 100,000 in number. After overcoming legal challenges posed by antitrust law and the economic threat of OPEC in the 70s, franchising became the dominant form of restaurant business organization in the 1980s.

Franchising enlarged markets and made competitive entry easier. By standardizing both product and service, it made restaurant operation easier. It raised the stakes involved in success and failure. All these increased the intensity of competition. In turn, this shone the spotlight on even the minutest aspects of restaurant operation. Franchises and food groups ran schools in which they taught their franchisees and managers the fundamentals of restaurant success. Managers went out on their own to put those principles into practice. The level of professional operation ratcheted upward throughout the industry.

The word “professional” means numerous things, but in context it refers to the rigorous, even relentless application of restaurant practices single-mindedly aimed at achieving profitable operation. This entails developing a repeat-customer base and making the largest profit possible from serving that base.

Whether the quality of all types of restaurant food improved is open to debate, but it cannot be doubted that average quality rose. Today, the “greasy spoons” of yesteryear are nearly as scarce as passenger pigeons.

It was during this period of franchise domination that the practice of quick takeaway gained widespread currency. Maximizing the daily turnover of the given restaurant capacity is a commandment in the operations bible for profit-maximization. Minimizing the time between the departure of one set of guests and the arrival of their successors at each table is one way to maximize turnover. One way to reduce the time taken by clearing tables at meal’s completion is to begin the process before departure rather than waiting until the guests get up to leave; that way, fewer dishes remain to remove upon actual departure.

Fast removal of dishes not only maximizes turnover, it also maximizes the revenue take from each separate turnover. From the restaurant owner’s perspective, maximizing the size of each table’s check is another step toward maximizing total profit. After-dinner items like coffee and dessert are the obvious route to that goal. (Alcoholic drinks are the before-dinner complement of this strategy, which is why attainment of a liquor license is a coveted goal for most restaurants.) Quick takeaway aids this strategy in two ways. First, it speeds the transition from dinner to dessert. Second, it aids the server, who is in no position to handle dish removal when arriving at the table laden with desserts.
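The arithmetic behind turnover maximization can be sketched with a few hypothetical numbers (the service length, meal and clearing times, and average check below are all assumed purely for illustration):

```python
# Illustrative sketch (all figures hypothetical): how faster table-clearing
# raises daily revenue per table by increasing the number of turnovers.

def turns_per_service(service_minutes, meal_minutes, clear_minutes):
    """Complete seatings per table during one service period."""
    return service_minutes // (meal_minutes + clear_minutes)

def revenue_per_table(service_minutes, meal_minutes, clear_minutes, avg_check):
    """Revenue one table generates over the service period."""
    return turns_per_service(service_minutes, meal_minutes, clear_minutes) * avg_check

service = 300        # a five-hour dinner service, in minutes
avg_check = 40.0     # dollars per party (assumed)

# Leisurely clearing: 20 minutes to clear and reset after a 55-minute meal.
slow = revenue_per_table(service, meal_minutes=55, clear_minutes=20, avg_check=avg_check)
# Quick takeaway: most dishes already gone, 5 minutes to reset.
fast = revenue_per_table(service, meal_minutes=55, clear_minutes=5, avg_check=avg_check)

print(slow, fast)  # → 160.0 200.0
```

Shaving fifteen minutes off clearing buys a fifth turnover in the same service period – a 25% revenue gain per table before any effect on check size from coffee and dessert.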

“Quick takeaway” has been standard practice throughout most of the industry for quite a while, though, so it cannot by itself account for the recent speedup. For that, look deeper into the details of restaurant operation.

Table Size, Takeaway and… Demographic Trends?

Concomitant with the trend toward faster takeaway, the economist has also observed a trend toward smaller tables and booths in casual restaurants. Tables, chairs and booths come in standard sizes (there are five different booth sizes, for example), but the observed trend has been toward more booths designed to accommodate two people. Greater usage has been made of bar areas to provide food service, wherein diners can often obtain quicker service at the cost of table space and chairs limited to two people.

To understand the rationale for this changeover, pretend for a moment that all of the restaurant’s patronage consists of parties of two. Larger tables and booths would waste space and unnecessarily limit revenue per turnover, whereas designing for two would maximize the number of people served (and revenue collected) from an individual full-house turnover.
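A minimal sketch of the same reasoning, using hypothetical floor-space figures, shows why two-person booths win when every party is a twosome:

```python
# Hypothetical sketch: diners seated per full-house turnover when every
# party is a twosome, comparing four-person booths with two-person booths.
# All floor-space figures are assumed for illustration.

FLOOR_SPACE = 600    # square feet of dining area (assumed)
SPACE_4_TOP = 60     # sq ft per four-person booth (assumed)
SPACE_2_TOP = 35     # sq ft per two-person booth (assumed)

# With parties of two, a four-person booth still seats only one party.
booths_4 = FLOOR_SPACE // SPACE_4_TOP     # 10 booths fit
diners_with_4_tops = booths_4 * 2         # 20 diners per turnover

booths_2 = FLOOR_SPACE // SPACE_2_TOP     # 17 booths fit
diners_with_2_tops = booths_2 * 2         # 34 diners per turnover

print(diners_with_4_tops, diners_with_2_tops)  # → 20 34
```

Same floor, same parties of two – but the two-top layout seats 14 more diners (and collects 14 more checks) per full-house turnover.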

The link between table size and quick takeaway is obvious. Smaller table and booth sizes leave less room to accommodate elbows, books, newspapers, miscellaneous articles – not to mention additional dishes like dessert. (Technically, a smaller table doesn’t mean less room per person, but the whole idea behind the move to smaller tables is to achieve better utilization of capacity – the result leaves much less unused space available than did the larger tables and booths.) Now servers have even more reason to get those vacated dishes moving back to the kitchen, since there was barely room for them on the table to begin with. This reinforces the preexisting motivation for fast table-clearing and enlists the diners’ sympathy on the side of management, since table-crowding has become all too obvious.

There is still one major link left out of the chain of reasoning. In practice, restaurant parties do not consist entirely of twosomes. Casual restaurants usually include a few larger tables and/or booths, but what is to prevent larger parties from dominating smaller ones in the great scheme of things?

The last four decades have seen an increasing demographic trend toward smaller U.S. household size. In 1970, the average U.S. household contained 3.1 people. By 2000, this had fallen to 2.62; by 2007, to 2.6; and by 2010, to 2.59.
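These figures imply a decline of roughly one-sixth in average household size over four decades:

```python
# Average U.S. household size, from the figures cited above.
household_size = {1970: 3.1, 2000: 2.62, 2007: 2.6, 2010: 2.59}

pct_decline = (household_size[1970] - household_size[2010]) / household_size[1970] * 100
print(round(pct_decline, 1))  # → 16.5 (percent smaller in 2010 than in 1970)
```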

Several forces drove this trend. First has been a shrinking birthrate. Here the U.S. is merely following the lead of other Western industrialized nations, which saw birthrates shrink throughout the 20th century. In the U.S., the shrinkage has waxed and waned since the 1930s. The 1990s saw a modest resurgence, and early in the new millennium the U.S. total fertility rate struggled barely above 2.0 births per woman – close to the replacement level of roughly 2.1, at which each generation just replaces itself. As noted by leading demographer Ben Wattenberg and others, the large influx of Hispanic immigrants in recent decades undoubtedly spearheaded this comeback. Hispanics tend to be Catholic, fecund and pro-life. But since 2007, the rate has backslid to 1.9; even Hispanics seem to have assimilated the American cultural indifference to reproduction.

Other cultural forces have reinforced demography. Birth control has become omnipresent and routine. Divorce and illegitimacy have lost their stigma, thereby conducing to households containing only one parent. Whereas formerly it was commonplace for two men or two women to room together and share expenses, the legal status granted to homosexual partnerships has now placed a question mark around those arrangements. (This applies particularly to males; apparently the politically correct status conferred upon homosexuals does not much reassure two heterosexual men who contemplate cohabitation.) Indeed, it is today less socially questionable for unmarried male/female couples to live together than for same-sex couples – but this is practical only as a substitute for marriage, so its effect on household size is negligible.

The aggregate effect of this cultural attrition has been nearly as potent as the declining birthrate. In 1970, the fraction of households containing one person living alone was 17%. By 2007, this had risen to 27%.

Given this trend toward declining household size, we would expect to see a corresponding decline in the average size of parties at casual restaurants. After all, households (particularly adults) typically dine together rather than separately. Certainly, large groups do assemble on special occasions and regular get-togethers. But the overall trend should follow this declining pattern.

And there you have it. Smaller average household size produces smaller restaurant table and booth size, which in turn produces quick – or rather, quicker – takeaway of dishes at or before meal completion.

Many people instinctively reject this kind of analysis because they can’t picture most restaurant owners and employees thinking this deeply about such minute details or putting their plans into practice. But the foregoing analysis doesn’t necessarily assume that all restaurant owners and managers are this single-minded and obsessive. In a hotly competitive environment, the restaurants that survive and thrive will be those that do take this attitude. They will attract more business – thus, the odds of encountering smaller tables and quick takeaway will be greater even though those practices may not be uniform across the industry. Indeed, this reasoning supports the very notion of profit maximization itself. This survivorship principle was pioneered by the great economist Armen Alchian.

The Larger Meaning of Little Details

Economics is capable of supplying answers to life’s quaint little questions. (Some people would rearrange the wording of that sentence to “quaint little answers to life’s questions.”) But economics was developed to tackle bigger issues. It turns out that the little questions bear on the big ones.

One of the big questions economists ask about the behavior of business firms is: Is it socially beneficial? Business firms exist because, and to the extent that, they produce goods and services cheaper and better than individual households can. The gauge of success is the welfare of consumers.

Smaller tables and quick takeaway enable restaurants to achieve better capacity utilization. This enables them to cut costs and serve more customers. These are beneficial to consumers. The more intense competition serves to lower prices of restaurant food. This also benefits consumers.

What about the quality of food served? Table size and dish removal do not bear directly on this question, but the industry shift towards corporate control and franchised ownership has sometimes been blamed for a supposed decline in overall food quality. This hypothesis overlooks the point staring it in the face – the fact that consumers themselves are the only possible judges of quality. Even if we assume that average quality has fallen, we have no basis for second-guessing the willingness of consumers to trade off lower quality for lower price and greater quantity. This is the same sort of tradeoff we make in every other sphere of consumption – housing, clothing, entertainment, medical care, ad infinitum.

The Left wing has recently developed a variation on its theme of corporate malignity in production and distribution of food. Corporations are destroying the health of their customers by purveying food containing too much sugar, salt, fat and taste. Only stringent government regulation of restaurant operations can hope to counteract the otherwise-irresistible lure of corporate advertising and junk food.

This hypothesis is not merely wrongheaded but wrong on the facts. Consumers have every right to trade off lower longevity for heightened enjoyment of life. This is something people often do in non-nutritive contexts such as athletics, extreme leisure pursuits like hang-gliding or public-service activities like missionary work. History indicates that, far from promoting public health, government has aided and abetted the increased incidence of type-II diabetes through wrong-headed dietary insistence on carbohydrate consumption as the foundational building block of nutrition.

Any objective appraisal must recognize that nowhere on earth can consumers find such abundance and diversity of cuisine as in the United States of America. World cuisine is amply represented even in mid-size metropolitan markets like Kansas City, Missouri and Sioux City, Iowa. There is no taste left unfulfilled – even the esoteric insistence on vegetarian meals, organic cultivation and free-range animal raising.

Restaurant Regulation

In order to appreciate the operation of a free market for restaurant meals, we need to dial down our level of abstraction and undertake a comparative-systems analysis. Heretofore we have conducted an imaginative exercise: we have explained a piece of restaurant operations under free-market competition. Now we need to envision how that piece would work under an alternative system like socialism.

In a socialist system, public ownership of the means of production dictates thoroughgoing, top-down regulation of business practice. For example, a regulator will pose the questions: How many booths and tables should the restaurant have? How big should they be? How far apart should they be spaced? How many people should we allow the restaurant to serve and how many should be allowed to sit at each table and booth?

In a socialist system, a regulator – or a group of them – will ask these questions in a centralized fashion. That is, they will be asked for a large grouping of restaurants – perhaps all restaurants, perhaps all fast-service restaurants, all bar-restaurants, all casual sit-down restaurants and all fine-dining restaurants. Or perhaps regulators will choose to group the restaurant industry differently. But group it they will and regulate each group on a one-size-rule-fits-all basis.

How will the regulator decide what regulations to impose? He will have government statistics at his disposal, such as the information cited above on average household size. It will be up to him to decide which information is relevant and how to apply the aggregate or collective information that governments collect to each individual restaurant being regulated. Even in the wildly unlikely instance that a regulator could actually visit each regulated restaurant, that could hardly happen more than once per year.

As we have just seen, free markets don’t work that way. One of the most misleading of popular perceptions is that free markets are “unregulated.” In reality, they are subject to the most stringent regulation of all – that of competition. But because the regulation part of competition works invisibly, people seem to miss its importance completely.

Instead of waiting for a central authority to certify its product as tasty and wholesome, markets supply their own verdict. Consumers try it for themselves. They ask their friends or take note when opinions are volunteered. They seek out reviews in newspapers, online and on television. When the verdict is unfavorable, bad news travels fast. This applies even more strongly to the aspect of health, by the way. Nothing empties a restaurant quicker than food-borne illness or even the rumor of it – as entrepreneurs know only too well.

In contrast, government health regulation doesn’t move nearly this fast. The cumbersome process of visits by the health inspector, trial-by-checklist followed by re-inspection – a pattern broken only rarely by a shutdown – is a classic example of bureaucracy at work. Political favoritism can affect the choice of inspections and the result. The de facto health inspector is the free market, not the government employee who holds that title.

Competitive regulation is decentralized. In our restaurant example, decisions about table size and restaurant takeaway are not made by a far-off government authority and applied uniformly. They are made on the spot, at each restaurant on a day-by-day basis. Restaurant owners and managers may possibly have the same government-collected information available to regulators, although it seems likely that they will be too busy to spend much time evaluating it. More to the point, though, they will have what the late Nobel laureate F. A. Hayek called “the knowledge of the particular circumstances of time and place.” That is the time- and place-specific information about each particular restaurant that only its owner and managers can mobilize.

Merely because average household size has fallen over the U.S. does not mean that households in each and every individual neighborhood are smaller. It may be the case, for example, that in Hispanic neighborhoods – not gripped by declining birthrates or an epidemic of divorce – average household size has not fallen as it mostly has elsewhere. Those restaurants would not feel the urge to decrease table size and speed up dish collections in line with most restaurants. And well they shouldn’t, since they would serve their particular customers better by not blindly playing follow-the-leader with national trends.

Would centralized regulators pick up on this distinction? No; they would have to be clairvoyant to sort out the kind of exceptions that markets automatically catch. After all, their aggregate statistics simply do not sift the data finely enough to make individual distinctions and differences visible.

But decentralized markets make those individual differences keenly felt by the people most affected. For restaurants, variations in consumer preference are felt by the very people who serve the consumer groups. Changes in demographic trends are witnessed by those whose very livelihoods are at stake. Competitive regulation works because it is on the spot, informed by the exact information needed and directed by the very people – on both sides of the market – with the motivation and expertise needed to make it effective.

Free markets allow participants to collect, disperse and heed information from any source but do not force people to respond to it. They do, however, provide incentives to respond proportionately to the magnitude of the information provided. A huge disruption of the supply of something will produce a big increase in price, suggesting to people that they reduce their consumption of that good a lot. A small decrease in a good’s price will offer a gentle inducement to increase consumption of it, but not to go hog wild over it.

Again and again, we find ourselves saying that free markets nudge people in the right direction, towards doing the thing that we would want done if we could somehow magically observe all economic activity and direct it with a wave of a magic wand. Economists laconically define this quality as being “efficient.”

Restaurant Economics and Rational Behavior

This object lesson in restaurant economics reminds us of a perceptive argument for free markets put forward by Hayek. He was responding to longtime arguments put forth by critics on the Left. The same arguments have recently reechoed following the housing bubble, financial crisis and ensuing Great Recession. Free markets may be logical, the critics concede, but only if people are rational. Since people behave irrationally, free markets must fail in practice, however well grounded their principles might be.

Hayek observed that the critics had it backwards. Markets do not require rational behavior by participants in order to function. Instead, markets encourage rational behavior by rewarding those who act rationally and penalizing those who do not. The history of mankind reveals a gradual movement towards more rational behavior; the widely noted reduction in the incidence of warfare is one noteworthy example of this.

The Audience Responds With a Burst of Applause

Can you imagine a nobler progression from the trivially mundane to the globally significant? That is what economists do.

And, by way of gratitude for this insight, your dinner companion rewards you by inquiring: “OK, now explain why restaurants are so stingy with the butter these days.”

DRI-309 for week of 10-21-12: The Economic Logic of Gifts

An Access Advertising EconBrief:

The Economic Logic of Gifts

The approach of the year-end holidays releases a flood of gift-oriented online content. One such article appeared on MSN on October 19. “What Women Want Men Don’t Give,” by Emily Jane Fox, seized on the publication of research by American Express and the Harrison Group as an opportunity for male bashing. The full findings, though, don’t provide ammunition to either side in the battle of the sexes. But they do supply grist to the mill of economists.

Economists study the practice of gift-giving carefully. This surprises most people, who view gifts and pecuniary purchases as antithetical behavior. Yet in both theory and practice, gifts loom large in economics.

Most laymen are aware that the bulk of retail sales are expended on holiday gifts during the Christmas season. But they are probably unaware that economists are even more interested in the individual motivations for giving than in their seasonal macroeconomic impact. Senator Jeff Sessions recently identified nearly 80 separate federal welfare programs that dispense around $1 trillion annually. Whether cash or in-kind, these disbursements are gifts in the technical sense. War reparations demanded by victor nations from the vanquished have sometimes changed the course of history, the most famous example being the steep debts levied on Weimar Germany by the Allies after World War I. Such reparations are yet another form of gift, this time on the national level. Their effects are proverbial among students of international economics.

The MSN article is best understood as an exercise in the modern practice of rhetorical journalism. That is, it is intended not to report facts but to produce an effect on the reader. Economists study unintended consequences of human action, and in this case the author’s attempt to manipulate her readers has produced surprising revelations about the economic logic of gift-giving.

The Process Frustrates Everybody – Including the Author

The research, sponsored by American Express and a consulting firm called the Harrison Group, surveyed 625 households whose working members ranked in the top 10% of wage-earners by income. The survey questions probed respondents’ preferences in both gift-giving and receiving. The results found “…a wide gap between what people want and what they actually get.” Apparently, the gap occurs because people do not give gifts in accordance with recipients’ wishes. Indeed, they do not even behave the way they themselves want their own gift-givers to act.

The author showcases women as victims of this asymmetry. “Two-thirds of affluent American women want gift cards. But less than a fifth of men will [comply]…Instead, 70% of American women are gifted clothing or jewelry.”

Thus, the article begins in a familiar manner. The reader is presented with stereotypes – women as practical consumers, men as selfish dinosaurs who persist in their ways heedless of feminine sensitivities. But just when the reader feels able to predict what is coming, the article changes course.

Men, too, suffer the pain of asking without receiving. Men “…want food and alcohol – a third are hoping for gourmet foods and fine wine, and another third want gift cards. But women like to give none of these – 30% are expected to give clothing and another 15% books… What wealthy shoppers are better suited for [is] giving gifts to themselves.”

The author’s frustration seems comparable to that of her subjects. Having begun with one agenda in mind – to reinforce the stereotype of male insensitivity – she runs up against the comparable worthlessness of female behavior. When her gender angle encounters a roadblock, she makes a last-minute detour in the direction of class envy by indicting the self-absorption of “wealthy shoppers.” But she lacks the space to start up another argument and must rest content with allowing the headline to do all her work.

What Do Women Want, Generically Speaking? Does This Differ From Male Wants?

Economists try to interpret facts in light of what they know – or think they know – about human motivation. Can we take the scenario as presented above and make sense of it?

“What do women want?” is an age-old question. Economics does not recognize separate male and female systems of logic, so the frustration apparently felt by the author does not afflict economists. The fact that men and women behave broadly alike is not shocking. The question is: Is their behavior logically consistent?

Some commenters on the online article showed disdain for the author’s willingness to question the value of a gift. Why not accept it in the (presumably charitable) spirit in which it was offered? That is a question worth tackling.

The exchange of gifts is a ritual dating back many centuries. The distinctive features of holiday gift exchange in the Western world are its reciprocal and quasi-compulsory character. Reciprocity implies that, roughly speaking, the net monetary value of the exchanges can be treated as cancelling out. Consequently, their only real value must be to achieve some sort of efficiency. Otherwise, why bother? When it comes to random gifts, the commenters have a point. The mouth of a non-reciprocal, fully voluntary gift horse is certainly not worth close examination.

There is much to criticize about the holiday gift-giving ritual, though. The custom of giving gifts in-kind runs afoul of the long-recognized economic presumption in favor of gifts in cash. Students of intermediate microeconomics courses are routinely shown the inefficiency of programs like the federal food-stamp program, which subsidizes the consumption of (somewhat) poor people by giving them subsidized food rather than a cash payment of equal value. (Technically, the term “equal value” must refer to the value of the subsidized good – food – that the recipient chooses, which can only be determined after the consumption choice is made on pre-selected terms. But it is easy to show diagrammatically or mathematically that giving the recipient an amount of cash equal to the value of the food they choose could never make them worse off and would probably make them better off.)

The inherent logic behind the demonstration is quite straightforward. The gift confers an increment of real income upon the recipient. An addition to real income creates a willingness to consume a larger amount and/or higher frequency of all normal goods, not merely more of one specific good. Hence, the new optimal basket of consumption goods will include increases in more than just one good or service. Receipt of real income in the form of cash allows maximum scope for distributing the increase among the different possible choices.
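The logic of this demonstration can be sketched numerically. The sketch below is purely illustrative, using made-up numbers and a simple Cobb-Douglas utility function with both prices set to 1 (none of these figures come from the research discussed here). It shows the standard result: a small in-kind grant merely ties with cash, because the recipient would have bought that much food anyway, while a large in-kind grant leaves the recipient strictly worse off than a cash grant of equal value.

```python
import math

def best_utility(income, free_food=0.0):
    """Grid-search the consumer's best (food, other goods) bundle.
    Utility is Cobb-Douglas: U = sqrt(food) * sqrt(other).
    Both prices are 1; `free_food` is an in-kind grant that
    cannot be resold for cash."""
    best = 0.0
    steps = 1000
    for i in range(steps + 1):
        spend_on_food = income * i / steps
        food = free_food + spend_on_food      # purchased food plus the grant
        other = income - spend_on_food        # everything else
        best = max(best, math.sqrt(food) * math.sqrt(other))
    return best

# Small grant ($20 on a $100 income): cash and in-kind tie,
# because the consumer would have bought at least $20 of food anyway.
small_cash   = best_utility(income=120)
small_inkind = best_utility(income=100, free_food=20)

# Large grant ($120): the in-kind recipient is stuck over-consuming
# food, so the equal-value cash grant yields strictly higher utility.
large_cash   = best_utility(income=220)
large_inkind = best_utility(income=100, free_food=120)

print(round(small_cash, 1), round(small_inkind, 1))   # equal
print(round(large_cash, 1), round(large_inkind, 1))   # cash strictly higher
```

The corner solution in the large-grant case is exactly the diagrammatic argument mentioned above: the in-kind recipient cannot trade unwanted food for other goods, so cash can never do worse and sometimes does better.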

Why doesn’t this same logic apply to gifts? The short answer is that it does. Economists have devised various ad hoc explanations to rationalize the practice of in-kind gift giving, but none of them really satisfies. That is why these research results are so unsurprising. Women prefer receipt of gift cards to clothing and jewelry. If the gift cards are issued by specialty stores, they allow the recipient a wider range of choice among the styles, brands and sizes of clothing and jewelry. If the cards are to department stores, they allow even wider branching out to other types of goods and services. There is even a legal market for the exchange of gift cards for cash (at a discount), just as food-stamp recipients once traded stamps for cash illegally at discounts up to 50%.

Gift cards are also among the preferred options of men, but men display more willingness to delegate the shopping for their preferred choices of gourmet foods and liquor. This is probably owing to the traditional division of labor, in which women shop for and prepare food but men often purchase liquor. This is the rare case in which people are willing to let somebody else make consumption choices for them – the giver is an expert and the recipient is not (or may not be).

In the light of this, the oft-expressed nostalgia for the Christmas of childhood is understandable. Children are typically net beneficiaries of the gift-giving ritual, with their gift exports being outweighed in number and value by their imports. This favorable holiday balance of payments casts a rosy glow over the holidays that gradually dims in intensity as increasing export responsibilities accompany the aging process.

Note that there is a role for gender in these research results, all right – just not the invidious one implied by the article’s headline. Another way to consider this matter is to ask whether women’s responses would differ markedly if the gift-giver were another woman, as opposed to a man. (Later, we will consider the significance of the degree of intimacy between giver and recipient.) Assuming the answer is no – and the article made no reference to any such distinction in this research – then the economic logic above is sound.

Modifying for the division of labor, the research results show both men and women displaying the basic utility-maximizing, economic preference for cash or cash substitutes rather than narrow in-kind gifts. What are we to make of the apparent fact that both sexes appear “better suited for giving gifts to themselves?”

Utility Maximization and Selfishness

The article’s author is apparently affronted by the possibility that some of us are better suited for giving gifts to ourselves than to others and she wants us to feel her outrage. She probably likes her chances because her target – the top 10% of wage earners – is a loose proxy for “the wealthy,” who are under assault from many sides these days. The overriding sin committed by the wealthy is alleged to be “greed” or “selfishness.” This has often been likened to the behavioral assumption underlying the economic theory of consumer demand, which is utility maximization. We assume that people try to become as happy as possible.

The equation of utility maximization with selfishness simply won’t wash. For one thing, utility maximization doesn’t say anything one way or the other about other people because the individual’s utility function is assumed to be independent of the consumption of other people. “Independent” means just that. It doesn’t mean that we set out to hurt other people or to studiously ignore them. It just means that our overriding goal is our own happiness.

And in fact it could hardly be any other way. The reality of our internal and external worlds dictates it.

Each of us instinctively recognizes the difficulty of ever really knowing another person as we know ourselves. The closest most of us ever come is through the institution of marriage, yet nearly half of U.S. marriages end in divorce. The same basic conflicts that drive couples apart also militate against optimal gift-giving – budgetary disagreements, differences in tastes and preferences, in maturity and temperament, in perception and grasp of reality.

Looking out for number one has been our evolutionary priority since day one. But ever since man began congregating in groups, an ethic of sacrificing individual wants to the needs of the group was promulgated. This ethic had survival value for the group, although it tended to be hard on particular individuals. And, over time, group leaders became adept at suspending the rules in their own case.

Meanwhile, mankind slowly developed a market process for increasing wealth and happiness. This market process ran counter to the group ethic because it increasingly demanded cooperation with individuals outside the group – indeed, cooperation between individuals who never met or even suspected that they were cooperating. This extended order of cooperation was one of the market’s greatest strengths, since it prevented political, religious or cultural differences from interfering with the growth of wealth and real income.

When the social order consisted of mated pairs living in caves, it was not unreasonable for one mate to control the consumption pattern of the pair. The choices were so few and so starkly simple, the human species so primitive that one could envision coming to anticipate the wants and desires of a spouse to a high degree. Today’s sophisticated world with tens of thousands of consumption choices made by evolved human brains makes nonsense of that concept. Even spouses cannot be expected to read each other’s minds well enough to reach the apex of consumption choice.

In this context, it is worthwhile to observe that women are apparently the ones who pretend to this level of expertise. They refuse to delegate their clothing and jewelry purchases but are more than willing to overrule men’s consumption choices, to the point of substituting clothes for food and liquor in their “gifting.” (Why not “giving,” by the way?) We accuse government bureaucrats of paternalism, but it would appear that this should instead be maternalism.

It is luminously clear that there can be only one true expert on your consumption, and that is you. Nobody else in the world could begin to accumulate the objective information on the thousands of potential goods that you can or might consume, or the subjective data on your particular tastes, preferences and attitudes towards them. It is sobering to realize that not even a life mate can approach the degree of familiarity needed to truly run your life for you.

Small wonder, then, that “nearly half of women” in the survey “said they were extremely, or very likely to buy themselves presents this holiday season. A third of men have similar intentions.”

It is idiotic to call this behavior selfishness when it merely acknowledges the practical facts of life. We need a vocabulary to describe an inordinate preoccupation with self – the kind displayed by thieves, murderers, embezzlers and the like – and this is the proper preserve of words like “greedy” and “selfish.”

The Theory of Gifts and Public Policy

The economic logic of gifts has important implications for public policy. For over four decades, various researchers have estimated the amount of welfare expenditures necessary to lift every man, woman and child above the so-called poverty line. Then they have compared this irreducible necessary minimum expenditure on fighting poverty with the amount actually spent by federal welfare programs. The ratio between what we spend and what we would theoretically need to spend has fluctuated over time. It has been as low as two and as high as ten. Currently, according to the latest estimate, it is about five.

We are ostensibly trying to eliminate poverty. We are currently spending five times more on indirect ways of doing this than we would need to spend if we simply gave cash directly to poor recipients. And we are failing to achieve the stated objective of eliminating poverty, since even if the value of cash and in-kind subsidies is added to income there are still a substantial number of people living below the poverty line. We know that cash subsidies are more effective at increasing the happiness of recipients than the in-kind subsidies, such as food stamps, in which the federal government specializes.

So why are we still pursuing a horribly wasteful and inefficient policy of fighting poverty and failing instead of implementing a simpler, much cheaper and more efficient policy that will succeed?

Put this way, the answer stands out. It is reinforced by the experience of any classroom teacher who ever explored the issue. In droves, students insist that we cannot afford to give cash to welfare recipients because they will spend the money in unsuitable ways; i.e., ways that the students do not approve of. Expenditure on illicit, mind-altering drugs is the example most often chosen to illustrate the point.

Students persist in this view even after the irrefutable demonstration that current in-kind forms of welfare, such as the food-stamp program in both its former and present incarnations, also allow recipients to increase their expenditure on “other goods” besides the subsidized good. (In-kind subsidies hinder the flexibility of recipients but allow them to buy the same amount of the subsidized good as before with less money, thereby freeing up more regular income for use in buying drugs or other contraband.)

Thus, it is clear that the actual rationale behind the government welfare system is not to improve the welfare of recipients by maximizing their utility. Instead, it is to maximize the utility of taxpayers by allowing them to control the lives of recipients while assuaging their own guilt. Taxpayers are responding to the vestigial evolutionary call of the group ethic that demands individual sacrifice for group preservation, while meeting their own need for utility maximization. They are countenancing interference in the lives of the poor that they would never sit still for in their own lives, and which they resist even in areas as relatively trivial as holiday gift-giving.

Economists look for ways to make everybody better off without making anybody worse off. Eliminating the federal welfare system would end an enormously wasteful and unproductive practice. Research shows that private charity is highly active and more efficient than federal efforts even though substantial taxpayer income is now diverted into federal anti-poverty efforts. If federal programs were ended, more funds would become available for private charitable purposes. Recipients could choose the degree of maternalism they found tolerable and donors could demand or reject maternalism, as they saw fit.

Meanwhile, resources would be freed up at the federal level to produce other things. Dislocations among employees due to agency closures would be no different than layoffs in the private sector due to shifts in consumer demand between different goods and services. Obviously, some federal employees would migrate to private-sector charities, where employment would rise.

Another extension of these principles applies to recent attempts by federal bureaucrats to fine-tune the pattern of consumption by banning or requiring the consumption of particular foods, minerals, vitamins, fats or other substances. In principle, a case might be made for provision of information allowing informed choice by consumers. The problem is that even here, the federal government’s past efforts have worsened the very problems it now purports to solve. But there is no case in favor of allowing government to dictate consumption choices made by citizens because government cannot possibly possess the comprehensive information necessary to verify whether its actions will improve or worsen the welfare of its subjects.

The Economics of Gifts

The research results reported in the MSN article may have frustrated its author, but they are consistent with the economic principle of utility maximization – properly understood. People request the kind and general form of gifts that tend to maximize their utility, but they take exactly the same tack when it comes to giving gifts to others – they tend to maximize their own utility, not that of the recipient. We can fume, fuss, moralize and complain about this behavior, but it is the only practical way to behave. Practical human limitations dictate it.

And when it comes to public policy, it is utterly futile to expect altruism and omniscience to suddenly triumph in an arena where they are even less potent than they are in private life. Private charity has its limitations, but it is best situated to cope with the inherent difficulties involved when one human being tries to help another.

DRI-380 for week of 7-29-12: The OYPI Challenge Returns: Religious Belief, Overpaid CEOs and Payday Loans

Religious Belief, Overpaid CEOs and Payday Loans

Regular readers of this column may recall the OYPI Challenge. The acronym OYPI stands for “Oh Yeah? Prove It!” It questions the validity of popular lore by challenging believers to back their beliefs with action – and cash. Accepting the challenge demands only the courage of one’s convictions, since the challenged beliefs imply the opportunity for easy profits to be made. How much courage does it take to pick up a $1000 bill lying on the sidewalk?

The underlying thesis of the OYPI Challenge is that talk is cheap, and the world is full of people saying things they don’t believe for purposes of political or personal gain. It would be shocking if this thesis were wholly original, and there is evidence to the contrary. In his recent book The Big Questions (2009), well-known economic popularizer Steven Landsburg develops an example that incorporates this fundamental insight implicitly.

Steven Landsburg on the Shakiness of Religious Belief

In the chapter entitled “What Do Believers Believe?” Landsburg casts doubt on the strength of wholesale religious belief. Noting that “the beliefs I go around repeating are the ones I don’t really believe…but when I pass the threshold to actual belief, I stop reviewing the matter,” Landsburg cites the widespread need for religious observance as one indicator of the shakiness of real faith.

He goes further by drawing inferences analogous to those implied in the OYPI Challenge. In principle, believers should commit fewer crimes, since they face punishment in the hereafter, not merely in the here and now. He finds no statistical case to support this proposition. Believers should fear death less, since the possibility (or certainty) of life after death should reduce the loss suffered as a result of death. Once again, Landsburg finds little or no evidence to support this notion. (Willingness to die for the faith, whether as a Christian martyr or an Islamic suicide bomber, seems decidedly scant.)

The eagerness to engage publicly in “interfaith dialogue” seems similarly suspicious, since it implies indifference to what are purportedly life’s guiding principles. Since religions proffer theories about the origin of the universe, the earth, life and its progression, one might expect that believers would specialize in the study of these matters. But they don’t.

It is true, Landsburg concedes, that some 90% of Americans profess belief in God. But this is suspect because there is so seldom anything of consequence riding on our beliefs or their expression. This explains why people so often give wrong or contradictory answers to pollsters.

From our perspective, the most intriguing thing about Landsburg’s analysis is its generic resemblance to our OYPI Challenge. Landsburg recognizes that public discourse is overrun with insincere and superficial professions of belief. He recognizes the reason why this is true; namely, that expression is costless; i.e., “talk is cheap.” Moreover, people are seldom motivated to probe or challenge their own professions of belief.

Ironically, Landsburg seems not to notice that he has conflated the problems of the existence of God and the origin and purpose of life with the nature and tenets of various organized religions. He seems equally unconscious of the fact that most of the best writing on religion and faith, both secular and theological, has addressed the issues he raises. Landsburg may have overlooked his debt to writers such as C. S. Lewis and Graham Greene, but we should not overlook ours to Landsburg for reinforcing the bedrock logic underlying the OYPI Challenge.

Our OYPI Challenge is objective in character. The result of the challenge is measured in dollars and cents. The believer is challenged to demonstrate financially both the truth of his belief and his confidence in it. Failure – or failure to respond – refutes the belief.

Overpaid CEOs

The current brouhaha over CEO pay owes much to the Occupy movement, which created the artificial distinctions of “1%” and “99%” as a way of dehumanizing and demonizing the possession of great wealth and high income. The greater the separation between “the rich” and the rest, the smaller the number of demons in comparison with the number of those possessed, the greater becomes the volume of outrage generated by the movement. CEOs are a highly visible minority, severely limited in number, whose activities are remote from the experience and sympathies of most people.

The popular theory of CEO overpayment goes something like this: CEOs are employed by corporations, which are inherently evil. Corporate boards of directors are rubber stamps of management, which somehow influences the board to pay the CEO in excess of his or her true worth. The gains to the CEO (and perhaps to the board, through bribery) come at the expense of rank-and-file workers – hence the dichotomy between the 1% and the 99%. This disproportion can be proved by two kinds of comparison: cross-section (U.S. CEOs compared to, say, Japanese CEOs) and time-series (the ratio of CEO-to-worker pay now compared to that in the past).

To those who (claim to) believe this thesis, this is the OYPI Challenge: Start a corporation in competition with one or more whose CEO is “overpaid” according to your criterion. Form a board of directors whose mission is to hire the lowest-paid CEO that can be found. Raise the wages of hourly workers in correspondence with the relative decline in salary paid to the CEO. According to the overpaid CEO hypothesis, one or both of two things should happen: the firm’s productivity and profits will increase because it will recruit higher-quality workers, or the firm will simply enjoy normal profits with a lower-earning CEO and higher-earning workers.

The reasons why nobody bothers to rise to this OYPI Challenge go beyond simple skepticism about the overpaid CEO thesis. Part of the problem is the hypothesis being challenged. For example, consider the ambiguity of the phrase “in correspondence with.” If this is interpreted to mean “pay the CEO 15% less than average and pay the workers 15% above the market wage,” its unworkability sticks out like a nose bitten by a bumblebee. The firm would save 15% of a CEO salary but lose 15% of a much larger wage bill; it would go broke in short order. Furthermore, the implication that all CEOs are interchangeable but some workers are better than others seems contraindicated by the facts.

On the other hand, the phrase might be interpreted to mean “distribute any savings from CEO pay among workers in the form of hourly wage increases.” But this would mean distributing a few million dollars among thousands of workers over the course of a year’s wage earnings. The gains would be real enough, but negligible in size. Certainly they wouldn’t make a discernible dent in the overall distribution of income even if generalized across an entire economy. So much for the “CEO gains are workers’ losses” component of the overpaid CEO hypothesis.
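The arithmetic in the two preceding paragraphs can be made concrete with a toy calculation. The figures below are invented for illustration (a $10 million CEO salary and 20,000 workers averaging $50,000 a year); they are not drawn from any actual firm. Under the first interpretation the payroll balloons, and under the second the per-worker gain is negligible.

```python
# Hypothetical, purely illustrative figures.
ceo_salary = 10_000_000          # CEO pay: $10M
workers    = 20_000
avg_wage   = 50_000
wage_bill  = workers * avg_wage  # total worker pay: $1.0B

# Interpretation 1: cut CEO pay 15%, raise every worker's wage 15%.
savings = 0.15 * ceo_salary      # $1.5M saved on the CEO
cost    = 0.15 * wage_bill       # $150M in new wage costs
print(f"net change in payroll: ${cost - savings:,.0f}")  # ~$148.5M increase

# Interpretation 2: redistribute the entire CEO pay cut among workers.
per_worker = savings / workers
print(f"raise per worker: ${per_worker:,.2f} per year")  # $75 a year
```

With these numbers, the first interpretation swells payroll by roughly $148.5 million, while the second hands each worker about $75 a year; the scale mismatch between one salary and thousands of wages is the whole point.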

Why are so many of us dubious about CEO pay but more than willing to endorse multimillion-dollar earnings for professional athletes and entertainers? People experience the value created by movie and rock stars viscerally and personally, while their grasp of CEO impact on the bottom line is shaky. They know the difference between a first-string and a second-string quarterback, but the distance separating a first-string CEO from a second-stringer eludes them.

Yet there is a market for corporate managerial talent, just as for athletes and actors. The people who pay CEO salaries are not board members but shareholders. Theirs are the pockets CEO pay comes out of, not those of the corporation’s hourly workers. Labor is also purchased in a market. If the firm pays too little, it can’t attract the workers it needs. If it pays too much, it goes out of business. CEO pay is unrelated to the firm’s payment of its workers. So much for the “management controls the board of directors, which picks the CEO” component of the overpaid CEO hypothesis. So much for the “CEOs are paid more than their true worth” component of the hypothesis.

Time-series comparisons of CEO and worker pay are not meaningful because there is no technological or economic reason why those ratios should remain steady over time. Over the course of the 20th century, athletes and entertainers increased their earnings tremendously compared to those of their bosses. The reasons for this were both technological and economic. While this was happening, it also became possible for CEOs to add more shareholder value to the firms they managed. Consequently, their salaries and bonus earnings increased accordingly.

Cross-section comparisons between American and Japanese CEOs presumably reflect political and cultural differences that blur the relevant distinctions. But here, as elsewhere, the OYPI Challenge emerges to cut through the murk and clarify the issue: if Japanese firms pay their CEOs less and otherwise perform as well as U.S. firms, there should be more left over for shareholders, the residual claimants of the firm’s earnings. So, a corollary OYPI Challenge is that believers in the overpaid CEO hypothesis should invest in Japanese firms and brandish their above-normal rates of return as proof of their hypothesis. If they can produce them, that is.

The hardest part of dealing with the overpaid CEO hypothesis is not refuting it; it is stating it in a form that is halfway sensible in the first place. But the inescapable truth is that supporters of this hypothesis are implicitly alleging the existence of a free lunch, a $1000 bill just lying there on the sidewalk, waiting to be picked up but languishing all by its lonesome because nobody notices or cares that it’s there. In reality, the overpaid CEO hypothesis is one more myth laid low by the ultimate myth buster, the OYPI Challenge.

Payday Loans

For sheer heart-tearing poignancy, no concerto for strings can match a publicity campaign against “payday loan” or high-interest loan companies. High-interest loans are those whose interest rate exceeds that normally carried by bank, finance-company or even pawn loans. The loans are unsecured, which makes them the riskiest class of consumer lending.

The term “payday loan” derives from the popular practice of arranging the term of the loan to expire on a future payday, thus ensuring that sufficient funds for repayment will be delivered in the borrower’s paycheck. Standard operating procedure is to provide bank account information – account number, routing number, etc. – to the loan company, which then automatically debits the borrower’s bank account for the loan service fee or repayment, whichever is the case.

The “first law of finance” is the direct relationship between risk and rate of return: the riskier the loan, the higher the interest rate required to make it. The extreme risk of payday loans comes from the fact that they are unsecured loans made to those who cannot qualify for any lower-cost alternative. Payday-loan borrowers typically cannot qualify for (or have maxed out on) a credit card, consumer-finance loan or mortgage, and have exhausted all other borrowing alternatives. In practice, this can mean that borrowers pay an effective annual interest rate to maturity exceeding 400%. Only true loan sharks – that is, members of organized crime who use physical violence to collect on their loans – charge higher loan rates of interest.

A jeremiad against payday loan firms goes something like this: Evil, greedy lenders prey on poor, unsuspecting borrowers by lending them money at stratospheric, usurious rates of interest. Once sucked in by the irresistible lure of immediate cash, the borrowers are caught in a fatal downward spiral of debt repayment at 400%+ annual rates of interest. The only way to prevent these financial merchants of death from sucking the wealth of the poor into their own pockets is by driving the payday lenders from the market or, at a minimum, capping the interest rates they charge.

The countervailing OYPI Challenge is this: apparently, payday-lender-enders believe that payday-loan firms have discovered the financial equivalent of the perpetual motion machine – a way to turn the poverty of the poor into their own wealth. OK, prove it – start your own payday loan firm and charge (say) a mere 200% or so effective annual interest rate. You’d be lifting a heavy burden from the poor by cutting their borrowing costs in half while proving your contention that those dreadfully high payday loan interest rates are indeed abusive, excessive and unnecessary for the operation of a viable high-risk loan firm.

Of course, that is the $64,000 question – do the crusaders really believe their own inflammatory rhetoric? A local talk-radio host in Kansas City, MO recently offered a counterproposal to draconian legislation against payday loan firms; namely, that religious charities should loan money to the poor at a rate of 4%. At this point, one is moved to inquire: Why not 3%? Or 2%? Or 1%? Why charge any interest at all?

The charging of interest reflects the phenomenon that economists call “time preference.” People prefer consumption in the present to consumption in the future, and the interest rate is an index of the discount placed on future goods – the farther out, the greater the discount. (If this were not true, the productivity of investment would induce an infinite amount of saving, since it would always be possible to increase the amount of real income available for consumption purposes by saving.) The interest rate charged by a lender must be at least sufficient to assure him or her of consumption opportunities greater in the future than those available today.
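The discounting that time preference implies can be sketched in a few lines of code. This is an illustration only; the 10% rate and the dollar amounts below are hypothetical, chosen simply to show that the discount on a future sum grows the farther out it lies.

```python
def present_value(future_amount, annual_rate, years):
    """Value today of a sum received in the future, discounted at annual_rate."""
    return future_amount / (1 + annual_rate) ** years

# $100 a year from now, discounted at a hypothetical 10% rate of time preference
one_year_out = present_value(100, 0.10, 1)    # about $90.91
five_years_out = present_value(100, 0.10, 5)  # about $62.09, a deeper discount
```

The farther-out sum is worth less today, exactly the “the farther out, the greater the discount” pattern described above.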

Super-high interest rates reflect the fact that unsecured loans made to low-income, bad-credit borrowers result in extremely high default rates. Thus, interest payments made by current borrowers must be sufficient to compensate for these defaults, which are simply written off by the payday-lending firms. (Mafia loan sharks, by contrast, never write anything off the books and collect in pain, suffering and death what they cannot collect in cash.) Anybody who doubts that 400% interest rates are necessary to ensure a profit plus a rate of return commensurate with the risk has simply never been in the business.

In reality, the payday-loan business is highly competitive, just like other lending businesses. The dozens of recent entrants in national and local markets, like Cash America, have succeeded in lowering effective interest rates somewhat from the norm of $30 per month per $100 borrowed (an annual repayment total, including principal, of $460). The fact that those rates remain very high is strong evidence that the OYPI Challenge will not be successfully met.
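The arithmetic behind that $460 figure is easy to verify. The sketch below assumes the flat-fee structure described above: $30 per month per $100 borrowed, with the loan carried for a full year.

```python
def annual_repayment(principal, fee_per_100_per_month, months=12):
    """Total repaid: principal plus a flat monthly fee per $100 borrowed."""
    monthly_fee = (principal / 100) * fee_per_100_per_month
    return principal + monthly_fee * months

total = annual_repayment(100, 30)     # $460 repaid on a $100 loan
effective_rate = (total - 100) / 100  # 3.6, i.e. a 360% annual interest rate
```

At $30 per $100 per month, the fee alone comes to $360 over a year, which is where effective annual rates in the 360-400%+ range come from.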

Incredibly, a question often posed by payday-lender-enders is: Why would anybody borrow money at 400%+? Isn’t this presumptive evidence of economic stupidity, justifying community action to save borrowers from themselves? Put this way, the question virtually answers itself. Payday-loan (or high-interest loan) customers are those who need money quickly and lack alternative access to money or credit. Legitimate needs are legion, ranging from home and automotive repairs to pet medical emergencies to avoidance of overdraft and default fees. Really, virtually any sudden need might give rise to a payday loan, and people from every stratum of society have sought them.

The truly penetrating questions are never asked. What gives payday-loan critics the right or the hubris to run the lives of borrowers by denying them access to the only form of credit open to them? What gives them the right to virtually run law-abiding businesses out of business? How would they feel if somebody came along and began running their lives based on confused and inaccurate analysis?

The OYPI Challenge Strikes Again

Public discourse is traditionally like the weather; everybody talks about it but nobody does anything about it. Everybody goes on believing what they began believing on the basis of their instincts and emotions. Nobody subjects their beliefs to scrutiny or test.

The OYPI Challenge is truth’s counterattack against the encroachments of habit, superstition and fable. It poses what the great economic historian Deirdre (formerly Donald) McCloskey called “the American Question: If you’re so smart, why ain’t you rich?” In McCloskey’s vein, we might call it “the American comeback: Put up or shut up.” After all, the one distinctive American school of philosophy is pragmatism.