DRI-211 for week of 3-29-15: Which First – Self-Driving Cars or Self-Flying Planes?

An Access Advertising EconBrief: 

Which First – Self-Driving Cars or Self-Flying Planes?

As details of the grisly demise of Lufthansa’s Germanwings flight 9525 gradually emerged, the truth became inescapable. The airliner had descended 10,000 feet in a quick but controlled manner, not the dead drop or death spiral of a disabled plane. No distress calls were sent. It became clear that the airplane had been deliberately steered into a mountainside. The recovery of the plane’s cockpit voice recorder – a “black box” – provided the anticlimactic evidence of a mass murder wrapped around an apparent suicide: the sound of a seat being pushed back as the captain excused himself from the cockpit, followed by the sound of the cockpit door closing, followed by the steady breathing of the copilot until the captain’s return. The sounds of the captain’s knocks and increasingly frantic demands to be readmitted to the cockpit were finally accompanied by the last-minute screams and shrieks of the passengers as they saw the French Alps looming up before them.

The steady breathing inside the cockpit showed that the copilot remained awake until the crash.

As we would expect, the reaction of the airline, Lufthansa, and government officials is now one of shock and disbelief. Brice Robin, Marseille public prosecutor, was asked if the copilot, Andreas Lubitz, had – to all intents and purposes – committed suicide. “I haven’t used the word suicide,” Robin demurred, while acknowledging the validity of the question. Carsten Spohr, Lufthansa’s CEO and himself a former pilot, begged to differ: “If a person takes 149 other people to their deaths with him, there is another word than suicide.” The obvious implication was that the other people were innocent bystanders, making this an act of mass murder that dwarfed the significance of the suicide.

This particular mass murder caught the news media off guard. We are inured to the customary form of mass murder, committed by a lone killer with handgun or rifle. He is using murder and the occasion of his death to attain the sense of personal empowerment he never realized in life. The news media reacts in stylized fashion with pious moralizing and calls for more and stronger laws against whatever weapon the killer happened to be using.

In the case of the airline industry, the last spasm of government regulation is still fresh in all our minds. It came in response to the mass murder of nearly 3,000 people on September 11, 2001, when terrorists hijacked commercial airliners and crashed them into the World Trade Center and the Pentagon. Regulation has marred airline travel with the pain of searches, scans, delays and tedium. Beyond that, the cockpits of airliners have been hardened to make them impenetrable from the outside – in order to provide absolute security against another deliberately managed crash by madmen.

Oops. What about the madmen within?

But, after a few days of stunned disbelief, the chorus found its voice again. That voice sounded like Strother Martin’s in the movie Cool Hand Luke. What we have here is a failure to regulate. We’ll simply have to find a way to regulate the mental health of pilots. Obviously, the private sector is failing in its clear duty to protect the public, so government will have to step in.

Now if it were really possible for government to regulate mental health, wouldn’t the first priority be to regulate the mental health of politicians? Followed closely by bureaucrats? The likely annual deaths attributable to government run to six figures, far beyond any mayhem suicidal airline pilots might cause. Asking government to regulate the mental health of others is a little like giving the job to the inmates of a psychiatric hospital – perhaps on the theory that only somebody with mental illness can recognize and treat it in others.

Is this all we can muster in the face of this bizarre tragedy? No, tragedy sometimes gives us license to say things that wouldn’t resonate at other times. Now is the time to reorganize our system of air-traffic control, making it not only safer but better, faster and cheaper as well.

The Risk of Airline Travel Today: The State of the Art

Wall Street Journal columnist Holman Jenkins goes straight to the heart of the matter in his recent column (03/28-29/2015, “Germanwings 9525 and the Future of Flight Safety”). The apparent mass-murder-by-pilot “highlights one way the technology has failed to advance as it should have.” Even though the commercial airline cockpit is “the most automated workplace in the world,” the sad fact is that “we are further along in planning for the autonomous car than for the autonomous airliner.”

How has the self-flying plane become not merely a theoretical possibility but a practical imperative? What stands in the way of its realization?

The answer to the first question lies in comparing the antiquated status quo in airline traffic control with the potential inherent in a system updated to current technological standards. The second answer lies in the recognition of the incentives posed by political economy.

Today’s “Horse and Buggy” System of Air-Traffic Control

For almost a century, air-traffic control throughout the world has operated under a “corridor system.” This has been accurately compared to the system of roads and lanes that governs vehicle transport on land, the obvious difference being that it incorporates an additional vertical dimension not present in the latter. Planes file flight plans that notify air-traffic controllers of their origin and ultimate destination. The planes are required to travel within specified flight corridors that are analogous to the lanes of a roadway. Controllers enforce distance limits between planes, analogous to the “car-lengths” distance between cars on roadways. Controllers regulate the order and sequence of takeoffs and landings at airports to prevent collisions.

Unfortunately, the corridor system is pockmarked with gross inefficiencies. Rather than being organized purely by function, it is instead governed primarily by political jurisdiction. This is jarringly evident in Europe, home to many countries in close physical proximity. An airline flight from one end of Europe to another may pass through dozens of different political jurisdictions, each time undergoing a “handoff” of radio contact for air-traffic control between plane and ground control.

In the U.S., centralized administration by the Federal Aviation Administration (FAA) surmounts some of this difficulty, but the antiquated reliance on radar for geographic positioning still demands that commercial aircraft report their positions periodically for handoff to a new air-traffic control boss. And the air corridors in the U.S. are little changed from the dawn of air-mail delivery in the 1920s and 30s, when hillside beacons provided vital navigational aids to pilots. Instead of regular, geometric air corridors, we have irregular, zigzag patterns that cause built-in delays in travel and waste of fuel. Meanwhile, the slightest glitch in weather or airport procedure can stack up planes on the ground or in the air and lead to rolling delays and mounting frustration among passengers.

Why Didn’t Airline Deregulation Solve or Ameliorate These Problems? 

Throughout the 20th century, the demand for airline travel grew like Topsy. But the system of air-traffic control remained antiquated. The only way that system could adjust to increased demand was by building more airports and hiring more air-traffic controllers. Building airports was complicated because major airports were constructed with public funds, not private investment. The rights-of-way, land-acquisition costs, and advantages of sovereign immunity all militated against privatization. Once air-traffic controllers unionized, the union predictably strove to restrict its membership in order to raise wages. This, too, made it difficult to cope with increases in passenger demand.

The deregulation of commercial airline entry and pricing that began in 1978 was an enormous boon to consumers. It ushered in a boom in airline travel. Paradoxically, this worsened the quality of the product consumers were offered because the federal government retained control over airline safety. This guaranteed that airport capacity and air-safety technology would not increase pari passu with consumer demand for airline travel. As Holman Jenkins puts it, the U.S. air-traffic-control system is “a government-run monopoly, astonishingly slow to upgrade its technology.” He cites the view of the leading expert on government regulation of transportation, Robert Poole of the Reason Foundation, that the system operates “as if Congress is its main customer.”

Private, profit-maximizing airlines have every incentive to ensure the safe operation of their planes and the timely provision of service. Product quality is just as important to consumers as the price paid for service; indeed, it may well be more important. History shows that airline crashes have highly adverse effects on the business of the companies affected. At the margin, an airline that offers a lower price for a given flight or provides safer transportation to its customers or gives its customers less aggravation during their trip stands to make more money through its actions.

In contrast, government regulators have no occupational incentive to improve airline safety. To be sure, they have an incentive to regulate – hire staff, pass rules, impose directives and generally look as busy as possible in their everyday operations. When a crash occurs, they have a strong incentive to assume a grave demeanor, rush investigators to the scene, issue daily updates on results of investigations and eventually issue reports. These activities are the kinds of things that increase regulatory staffs and budgets, which in turn increase salaries of bureaucrats. They serve the public-relations interests of Congress, which controls regulatory budgets. But government regulators have no marginal incentive whatsoever to reduce the incidence of crashes or flight delays or passenger inconvenience – their bureaucratic compensation is not increased by improved productivity in these areas despite the fact that THIS IS REALLY WHAT WE WANT GOVERNMENT TO DO.

Thus, government regulators really have no incentive to modernize the air-traffic control system. And guess what? They haven’t done it; nor have they modernized the operation of airports. Indeed, the current system meets the needs of government well. It guarantees that accidents will continue to happen – this will continue to require investigation by government, thus providing a rationale for the FAA’s accident-investigation apparatus. Consumers will continue to complain about delays and airline misbehavior – this will require a government bureau to handle complaints and pretend to rectify mistakes made by airlines. And results of accident investigations will continue to show that something went wrong – after all, that is the definition of an accident, isn’t it? Well, the FAA’s job is to pretend to put that something right, whatever it might be.

The FAA and the National Transportation Safety Board (NTSB) are delighted with the status quo – it justifies their current existence. The last thing they want is a transition to a new, more efficient system that would eliminate accidents, errors and mistakes. That would weaken the rationale for big government. It would threaten the rationale for their jobs and their salaries.

Is there such a system on the horizon? Yes, there is.

Free Flight and the Future of Fully Automatic Airline Travel

A 09/06/2014 article in The Economist (“Free Flight”) is subtitled “As more aircraft take to the sky, new technology will allow pilots to pick their own routes but still avoid each other.” The article describes the activities of a Spanish technology company, Indra, involved in training a new breed of air-traffic controllers. The controllers do not shepherd planes to their destinations like leashed animals. Instead, they merely supervise autonomous pilots to make sure that their decisions harmonize with each other. The controllers are analogous to the auctioneer in the general-equilibrium models of pricing developed by the 19th-century economist Léon Walras.

The basic concept of free flight is that the pilot submits a flight plan allowing him or her to fly directly from origin to destination, without having to queue up in a travel corridor behind other planes and travel the comparatively indirect route dictated by the air-traffic control system. This allows closer spacing of planes in the air. Upon arrival, it also allows “continuous descent” rather than the more circuitous approach method that is now standard. This saves both time and fuel. For the European system, the average time saved has been estimated at ten minutes per flight. For the U. S., this would undoubtedly be greater. Translated into fuel, this would be a huge saving. For those concerned about the carbon dioxide emissions of airliners, this would be a boon.

The obvious question is: How are collisions to be avoided under the system of free flight? Technology provides the answer. Flight plans are submitted no less than 25 minutes in advance. Today’s high-speed computing power allows reconciliation of conflicts and any necessary adjustments in flight-paths to be made prior to takeoff. “Pilots” need only stick to their flight plan.
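To make the reconciliation step concrete, here is a minimal sketch in Python of a pairwise conflict check between two straight-line flight plans. It is a toy model under stated assumptions – flat two-dimensional geometry, constant speed, and an assumed five-nautical-mile separation minimum – not the algorithm actually used by Indra or any air-traffic authority.

```python
import math

# Illustrative sketch only: a toy 2-D conflict check between two straight-line flight plans.
# Real free-flight systems use far richer models (altitude, winds, aircraft performance);
# the 5 nm separation minimum below is an assumption for illustration.

SEPARATION_NM = 5.0  # assumed minimum horizontal separation

def position(plan, t_min):
    """Position (x, y) in nautical miles at time t_min (minutes after midnight)."""
    (x0, y0), (x1, y1), departure, speed_kts = plan
    dist = math.hypot(x1 - x0, y1 - y0)
    flight_time = dist / speed_kts * 60.0                     # minutes en route
    frac = min(max((t_min - departure) / flight_time, 0.0), 1.0)
    return (x0 + frac * (x1 - x0), y0 + frac * (y1 - y0))

def min_separation(plan_a, plan_b, step=0.5):
    """Scan both trajectories over the day and return the closest approach in nautical miles."""
    t = min(plan_a[2], plan_b[2])
    end = t + 24 * 60
    closest = float("inf")
    while t <= end:
        ax, ay = position(plan_a, t)
        bx, by = position(plan_b, t)
        closest = min(closest, math.hypot(ax - bx, ay - by))
        t += step
    return closest

# Two hypothetical flight plans: ((origin), (destination), departure minute, speed in knots)
plan_a = ((0, 0), (400, 300), 480, 450)
plan_b = ((400, 0), (0, 300), 485, 430)

sep = min_separation(plan_a, plan_b)
print(f"Closest approach: {sep:.1f} nm -> {'conflict' if sep < SEPARATION_NM else 'clear'}")
```

In a free-flight regime, checks of this general kind would be run across all submitted flight plans before takeoff, with flagged conflicts resolved by small adjustments to timing or routing.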

Streamlining of flight paths is only the beginning of the benefits of free flight. Technology now exists to replace the current system of radar and radio positioning of flights with satellite navigation. This would enable the exact positioning of a flight by controllers at a given moment. The European air-traffic control system is set to transition to satellite navigation by 2017; the U.S. system by 2020.

The upshot of all these advances is that the travel delays that currently have the public up in arms would be gone under the free flight system. It is estimated that the average error in flight arrivals would be no more than one minute.

Why must we wait another five years to reap the gains from a technology so manifestly beneficial? Older readers may recall the series of commercials in which Orson Welles promoted a wine with the slogan “We sell no wine before its time.” The motto of government regulation should be “we save no life before its time.”

The combination of free flight and satellite navigation is incredibly potent. As Jenkins notes, “the networking technology required to make [free flight] work [lends] itself naturally and almost inevitably to computerized aircraft controllable from the ground.” In other words, the human piloting of commercial aircraft has become obsolete – and has been so for years. The only thing standing between us and self-flying airliners has been the open opposition of commercial pilots and their union and the tacit opposition of the regulatory bureaucracy.

Virtually all airline crashes that occur now are the result of human error – or human deliberation. The publication Aviation Safety Network listed 8 crashes since 1994 that are believed to have been deliberately caused by the pilot. The fatalities involved were (in ascending order) 1, 1, 4, 12, 33, 44, 104 and 217. Three cases involved military planes stolen and crashed by unstable pilots, but of the rest, four were commercial flights whose pilots or copilots managed to crash their plane and take the passengers with them.

Jenkins resurrects the case of a Japanese pilot who crashed his DC-8 into Tokyo Bay in 1982. He cites the case of the Air Force pilot who crashed his A-10 into a Colorado mountain in 1997. He states what so far nobody else has been willing to say, namely that “last March’s disappearance of Malaysia Airlines 370 appears to have been a criminal act by a member of the crew, though no wreckage has been recovered.”

The possibility of human error and human criminal actions is eliminated when the human element is removed. That is the clincher – if one were needed – in the case for free flight to replace our present antiquated system of air-traffic organization and control.

The case for free flight is analogous to the case for free markets and against the system of central planning and government regulation.

What if… 

Holman Jenkins reveals that as long ago as 1993 (!) no less a personage than Al Gore (!!) unveiled a proposal to partially privatize the air-traffic control system. This would have paved the way for free flight and automation to take over. As Jenkins observes retrospectively, “there likely would have been no 9/11. There would have been no Helios 522, which ran out of fuel and crashed in 2005 when its crew was incapacitated. There would have been no MH 370, no Germanwings 9525.” He is omitting the spillover effects on private aviation, such as the accident that claimed the life of golfer Payne Stewart.

The biggest “what if” of all is the effect on self-driving cars. Jenkins may be the most prominent skeptic about the feasibility – both technical and economic – of autonomous vehicles in the near term. But he is honest enough to acknowledge the truth. “Today we’d have decades of experience with autonomous planes to inform our thinking about autonomous cars. And disasters like the intentional crashing of the Germanwings plane would be hard to conceive of.”

What actually happened was that Gore’s proposal was poured through the legislative and regulatory cheesecloth. What emerged was funding to “study” it within the FAA – a guaranteed ticket to the cemetery. As long as commercial demand for air travel was increasing, pressure on the agency to do something about travel delays and the strain on airport capacity kept the idea alive. But after 9/11, the volume of air travel plummeted for years and the FAA was able to keep the lid on reform by patching up the aging, rickety structure.

And pilots continued to err. On very, very rare occasions, they continued to murder. Passengers continued to die. The air-traveling public continued to fume about delays. As always, they continued to blame the airlines instead of placing blame where it belonged – on the federal government. Now air travel is projected to more than double by 2030. How long will we continue to indulge the fantasy of government regulation as protector and savior?

Free markets solve problems because their participants can only achieve their aims by solving the problems of their customers. Governments perpetuate problems because the aims of politicians, bureaucrats and government employees are served by the existence of problems, not by their solution.

DRI-172 for week of 1-18-15: Consumer Behavior, Risk and Government Regulation

An Access Advertising EconBrief: 

Consumer Behavior, Risk and Government Regulation

The Obama administration has drenched the U.S. economy in a torrent of regulation. It is a mixture of new rules formulated by new regulatory bodies (such as the Consumer Financial Protection Bureau), new rules levied by old, preexisting federal agencies (such as those slapped on bank lending by the Federal Reserve) and old rules newly imposed or enforced with new stringency (such as those emanating from the Department of Transportation and bedeviling the trucking industry).

Some people within the business community are pleased by these regulations, but it is fair to say that most are not. The President and his subordinates, however, have been unyielding in their insistence that the rules are not merely desirable but necessary to the health, well-being, vitality and economic growth of America.

Are the people affected by the regulations bad? Do the regulations make them good, or merely constrain their bad behavior? What entitles the particular people designing and implementing the regulations to perform in this capacity – is it their superior motivations or their superior knowledge? That is, are they better people or merely smarter people than those they regulate? The answer can’t be democratic election, since regulators are not elected directly. We are certainly entitled to ask why a President could possibly suppose that some people can effectively regulate an economy of over 300 million people. If they are merely better people, how do we know that their regulatory machinations will succeed, however well-intentioned they are? If they are merely smarter people, how do we know their actions will be directed toward the common good (whatever in the world that might be) and not toward their own betterment, to the exclusion of all else? Apparently, the President must select regulators who are both better people and smarter people than their constituents. Yet government regulators are typically plucked from comparative anonymity rather than from the firmament of public visibility.

Of all American research organizations, the Cato Institute has the longest history of examining government regulation. Recent Cato publications help rebut the longstanding presumptions in favor of regulation.

The FDA Graciously Unchains the American Consumer

In “The Rise of the Empowered Consumer” (Regulation, Winter 2014-2015, pp. 34-41, Cato Institute), author Lewis A. Grossman recounts the Food and Drug Administration’s (FDA) policy evolution beginning in the mid-1960s. He notes that “Jane, a [hypothetical] typical consumer in 1966… had relatively few choices” across a wide range of food products like “milk, cheese, bread and jam” because FDA’s “identity standards allowed little variation.” In other words, the government determined what kinds of products producers were allowed to legally produce and sell to consumers. “Food labels contained barely any useful information. There were no ‘Nutrition Facts’ panels. The labeling of many foods did not even include a statement of ingredients. Nutrient content descriptors were rare; indeed, the FDA prohibited any reference whatever to cholesterol. Claims regarding foods’ usefulness in preventing disease were also virtually absent from labels; the FDA considered any such statement to render the product an unapproved – and thus illegal – drug.”

Younger readers will find the quoted passage startling; they have probably assumed that ingredient and nutrient-content labels were forced on sellers over their strenuous objections by noble and altruistic government regulators.

Similar constraints bound Jane should she have felt curiosity about vitamins, minerals or health supplements. The types and composition of such products were severely limited and their claims and advertising were even more severely limited by the FDA. Over-the-counter medications were equally limited – few in number and puny in their effectiveness against such infirmities as “seasonal allergies… acid indigestion… yeast infection[s] or severe diarrhea.” Her primary alternative for treatment was a doctor’s visit to obtain a prescription, which included directions for use but no further enlightening information about the therapeutic agent. Not only was there no Internet; copies of the Physicians’ Desk Reference were unavailable in bookstores. Advertising of prescription medicines was strictly forbidden by the FDA outside of professional publications like the Journal of the American Medical Association.

Food substances and drugs required FDA approval. The approval process might as well have been conducted in Los Alamos under FBI guard as far as Jane was concerned. Even terminally ill patients were hardly ever allowed access to experimental drugs and treatments.

From today’s perspective, it appears that the position of consumers vis-à-vis the federal government in these markets was that of a citizen in a totalitarian state. The government controlled production and sale; it controlled the flow of information; it even controlled the life-and-death choices of the citizenry, albeit with benevolent intent. (But what dictatorship – even the most savage in history – has failed to reaffirm the benevolence of its intentions?) What led to this situation in a country often advertised as the freest on earth?

In the late 19th and early 20th centuries, various incidents of alleged consumer fraud and the publicity given them by muckraking authors prompted the Progressive administrations of Theodore Roosevelt, William Howard Taft and Woodrow Wilson to launch federal-government consumer regulation. The FDA was the flagship creation of this movement, the outcome of what Grossman called a “war against quackery.”

Students of regulation observe this common denominator. Behind every regulatory agency there is a regulatory movement; behind every movement there is an “origin story;” behind every story there are incidents of abuse. And upon investigation, these abuses invariably prove either false or wildly exaggerated. But even had they been meticulously documented, they would still not substantiate the claims made for them and not justify the regulatory actions taken in response.

Fraud was illegal throughout the 19th and 20th centuries, and earlier. Competitive markets punish producers who fail to satisfy consumers by putting the producers out of business. Limiting the choices of producers and consumers harms consumers without providing compensating benefits. The only justification for FDA regulation of the type practiced during the first half of the 20th century was that government regulators were omniscient, noble and efficient while consumers were dumbbells. That is putting it baldly, but it is hardly an overstatement. After all, consider the situation that exists today.

Plentiful varieties of products exist for consumers to pick from. They exist because consumers want them to exist, not because the FDA decreed their existence. Over-the-counter medications are plentiful and effective. The FDA tries to regulate their uses, as it does for prescription medications, but thankfully doctors can choose from a plethora of “off-label” uses. Nutrient and ingredient labels inform the consumer’s quest to self-medicate such widespread ailments as Type II diabetes, which spread to near-epidemic status but is now being controlled thanks to rejection of the diet that the government promoted for decades and embrace of a diet that the government condemned as unsafe. Doctors and pharmacists discuss medications and supplements with patients and provide information about ingredients, side effects and drug interactions. And patients are finally rising in rebellion against the tyranny of FDA drug approval and the pretense of compassion exhibited by the agency’s “compassionate use” drug-approval policy for patients facing life-threatening diseases.

Grossman contrasts the totalitarian policies of yesteryear with the comparative freedom of today in polite academic language. “The FDA treated Jane’s… cohort…as passive, trusting and ignorant consumers. By comparison, [today’s consumer] has unmediated [Grossman means free] access to many more products and to much more information about those products. Moreover, modern consumers have acquired significant influence over the regulation of food and drugs and have generally exercised that influence in ways calculated to maximize their choice.”

Similarly, he explains the transition away from totalitarianism to today’s freedom in hedged terms. To be sure, the FDA gave up much of its power over producers and consumers kicking and screaming; consumers had to take all the things listed above rather than receive them as the gifts of a generous FDA. Nevertheless, Grossman insists that consumers’ distrust of the word “corporation” is so profound that they believe that the FDA exerts some sort of countervailing authority to ensure “the basic safety of products and the accuracy and completeness of labeling and advertising.” This concerning an agency that fought labeling and advertising tooth and claw! As to safety, Grossman makes the further caveat that consumers “prefer that government allow consumers to make their own decisions regarding what to put in their bodies…except in cases in which risk very clearly outweighs benefit” [emphasis added]. That implies that consumers believe that the FDA has some special competence to assess risks and benefits to individuals, which completely contradicts the principle that individuals should be free to make their own choices.

Since Grossman clearly treats consumer safety and risk as a special case of some sort, it is worth investigating this issue at special length. We do so below.

Government Regulation of Cigarette Smoking

For many years, individual cigarette smokers sued cigarette companies under the product-liability laws. They claimed that cigarettes “gave them cancer,” that the cigarette companies knew it and that consumers didn’t, and that the companies were liable for selling dangerous products to the public.

The consumers got nowhere.

To this day, an urban legend persists that the tobacco companies’ run of legal success was owed to deep financial pockets and fancy legal footwork. That is nonsense. As the leading economic expert on risk (and on the longtime cigarette controversy), W. Kip Viscusi, concluded in Smoke-Filled Rooms: A Postmortem on the Tobacco Deal, “the basic fact is that when cases reached the jury, the jurors consistently concluded that the risks of cigarettes were well-known and voluntarily incurred.”

In the early 1990s, all this changed. States sued the tobacco companies for medical costs incurred by government due to cigarette smoking. The suits never reached trial. The tobacco companies settled with four states; a Master Settlement Agreement applied to remaining states. The aggregate settlement amount was $243 billion, which in the days before the Great Recession, the Obama administration and the Bernanke Federal Reserve was a lot of money. (To be sure, a chunk of this money was gobbled up by legal fees; the usual product-liability portion is one-third of the settlement, but gag orders have hampered complete release of information on lawyers’ fees in these cases.)

However, the states were not satisfied with this product-liability bonanza. They increased existing excise taxes on cigarettes. In “Cigarette Taxes and Smoking,” Regulation (Winter 2014-2015, pp. 42-46, Cato Institute), authors Kevin Callison and Robert Kaestner ascribe these tax increases to “the hypothesis… that higher cigarette taxes save a substantial number of lives and reduce health-care costs by reducing smoking, [which] is central to the argument in support of regulatory control of cigarettes through higher cigarette taxes.”

Callison and Kaestner cite research from anti-smoking organizations and comments to the FDA that purport to find price elasticities of demand for cigarettes of between -0.3 and -0.7, with the lower figure applying to adults and the higher to adolescents. (The words “lower” and “higher” refer to the absolute, not algebraic, value of the elasticities.) Price elasticity of demand is defined as the percentage change in quantity demanded associated with a 1 percent change in price. Thus, a 1% increase in price would cause quantity demanded to fall by between 0.3% and 0.7% according to these estimates.
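For readers who want to see the arithmetic, here is a minimal sketch of how those figures translate into behavior. It simply applies the elasticity estimates quoted above to a hypothetical price increase; no new data are involved.

```python
# Illustrative only: applies the elasticity range quoted above to a hypothetical 10% price rise.
def quantity_change(price_change_pct, elasticity):
    """Approximate percent change in quantity demanded for a given percent price change."""
    return elasticity * price_change_pct

for elasticity in (-0.3, -0.7):
    print(f"Elasticity {elasticity}: a 10% price rise implies a "
          f"{quantity_change(10, elasticity):.0f}% change in quantity demanded")
# Elasticity -0.3: a 10% price rise implies a -3% change in quantity demanded
# Elasticity -0.7: a 10% price rise implies a -7% change in quantity demanded
```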

The problem with these estimates is that they were based on research done decades ago, when smoking rates were much higher. The authors find that today’s smokers are mostly the young and the poorly educated. Their price elasticities are very, very low. Higher cigarette taxes have only a minuscule effect on consumption of cigarettes. They do not reduce smoking to any significant extent. Thus, they do not save on health-care costs.

They serve only to fatten the coffers of state governments. Cigarette taxes today play the role once played by the infamous salt tax levied by French kings before the French Revolution. When the tax goes up, the effective price paid by the consumer goes up. When consumption falls by a much smaller percentage than the price increase, tax revenues rise. Both the cigarette-tax increases of today and the salt-tax increases of the 17th and 18th centuries were big revenue-raisers.
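A minimal numerical sketch makes the revenue logic concrete. The figures below are made-up round numbers chosen only to illustrate inelastic demand; they are not drawn from Callison and Kaestner.

```python
# Hypothetical round numbers illustrating why inelastic demand makes a tax increase a revenue-raiser.
price, packs, tax = 6.00, 1_000_000, 1.00   # assumed: $6 per pack (tax included), $1 excise tax
elasticity = -0.2                            # assumed low price elasticity for today's smokers

new_tax = 2.00
new_price = price + (new_tax - tax)          # assume the tax increase is passed through to consumers
pct_price_change = (new_price - price) / price
new_packs = packs * (1 + elasticity * pct_price_change)

print(f"Old revenue: ${packs * tax:,.0f}")          # $1,000,000
print(f"New revenue: ${new_packs * new_tax:,.0f}")  # about $1,933,333
# Consumption falls only about 3.3%, so doubling the tax nearly doubles the revenue.
```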

In the 1990s, tobacco companies were excoriated as devils. Today, though, several of the lawyers who sued the tobacco companies are either in jail for fraud, under criminal accusation or dead under questionable circumstances. And the state governments that “regulate” the tobacco companies by taxing them are now revealed as merely in it for the money. They have no interest in discouraging smoking, since it would cut into their revenues if smoking were to fall too much. State governments want smoking to remain price-inelastic so that they can continue to raise more revenue by raising taxes on cigarettes.

 

Can Good Intentions Really Be All That Bad? The Cost of Federal-Government Regulation

The old saying “You can’t blame me for trying” suggests that there is no harm in trying to make things better. The economic principle of opportunity cost reminds us that the use of resources for one purpose – in this case, the various ostensibly benevolent and beneficent purposes of regulation – denies the benefits of using them for something else. So how costly is that?

In “A Slow-Motion Collapse” (Regulation, Winter 2014-2015, pp. 12-15, Cato Institute), author Pierre Lemieux cites several studies that attempted to quantify the costs of government regulation. The most comprehensive of these was by academic economists John Dawson and John Seater, who used variations in the annual Code of Federal Regulations as their index for regulatory change. In 1949, the CFR had 19,335 pages; by 2005, the total had risen to 134,261 pages, a seven-fold increase in less than six decades. (Remember, this includes federal regulation only, excluding state and local government regulation, which might triple that total.)

Naturally, proponents of regulation blandly assert that the growth of real income (also roughly seven-fold over the same period) requires larger government, hence more regulation, to keep pace. This nebulous generalization collapses upon close scrutiny. Freedom and free markets naturally result in more complex forms of goods, services and social interactions, but if regulatory constraints “keep pace” this will restrain the very benefits that freedom creates. The very purpose of freedom itself will be vitiated. We are back at square one, asking the question: What gives regulation the right and the competence to make that sort of decision?

Dawson and Seater developed an econometric model to estimate the size of the bite taken by regulation from economic growth. Their estimate was that it has reduced economic growth on average by about 2 percentage points per year. This is a huge reduction. If we were to apply it to the 2011 GDP, it would work as follows: Starting in 1949, had all subsequent regulation not happened, 2011 GDP would have been 39 trillion dollars higher, or about 54 trillion. As Lemieux put it: “The average American (man, woman and child) would now have about $125,000 more per year to spend, which amounts to more than three times [current] GDP per capita. If this is not an economic collapse, what is?”
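The arithmetic behind that counterfactual is simple compounding. The back-of-envelope check below uses assumed round figures (2011 GDP of roughly $15 trillion and a population of about 311 million), so it lands in the same ballpark as the article’s numbers rather than matching them exactly.

```python
# Back-of-envelope check of the compounding claim quoted above.
# Assumed round figures: 2011 GDP of about $15 trillion and a U.S. population of about 311 million.
actual_gdp_2011 = 15.0e12
years = 2011 - 1949              # 62 years
growth_penalty = 0.02            # roughly 2 percentage points of growth lost per year, per Dawson and Seater

counterfactual_gdp = actual_gdp_2011 * (1 + growth_penalty) ** years
extra_gdp = counterfactual_gdp - actual_gdp_2011
extra_per_person = extra_gdp / 311e6

print(f"Counterfactual 2011 GDP: about ${counterfactual_gdp / 1e12:.0f} trillion")
print(f"Foregone output: about ${extra_gdp / 1e12:.0f} trillion, or ${extra_per_person:,.0f} per person")
```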

Lemieux points out that, while this estimate may strain the credulity of some, it also may actually incorporate the effects of state and local regulation, even though the model itself did not include them in its index. That is because it is reasonable to expect a statistical correlation between the three forms of regulation. When federal regulation rises, it often does so in ways that require corresponding matching or complementary state and local actions. Thus, those forms of regulation are hidden in the model to some considerable degree.

Lemieux also points to Europe, where regulation is even more onerous than in the U.S. – and growth has been even more constipated. We can take this reasoning even further by bringing in the recent example of less-developed countries. The Asian Tigers experienced rapid growth when they espoused market-oriented economics; could their relative lack of regulation help explain this economic-development success story? India and mainland China turned their economies around when they turned away from socialism and Communism, respectively; regulation still hamstrings India, while China is dichotomized into a relatively autonomous small-scale competitive sector and a heavily regulated and planned, government-controlled big-business economy. Signs point to a recent Chinese growth dip tied to the bursting of a bubble created by easy money and credit granted to the regulated sector.

The price tag for regulation is eye-popping. It is long past time to ask ourselves why we are stuck with this lemon.

Government Regulation as Wish-Fulfillment

For millennia, children have cultivated dream fantasies of magical figures that make their wishes come true. These apparently satisfy a deep-seated longing for security and fulfillment. Freud referred to this need as “wish fulfillment.” Although Freudian psychology was discredited long ago, the term retains its usefulness.

When we grow into adulthood, we do not shed our childish longings; they merely change form. In the 20th century, motion pictures became the dominant art form in the Western world because they served as fairy tales for adults by providing alternative versions of reality that were preferable to daily life.

When asked by pollsters to list or confirm the functions regulation should perform, citizens repeatedly compose “wish lists” that are either platitudes or, alternatively, duplicate the functions actually approximated by competitive markets. It seems even more significant that researchers and policymakers do exactly the same thing. Returning to Lewis Grossman’s evaluation of the public’s view of FDA: “Americans’ distrust of major institutions has led them to the following position: On the one hand, they believe the FDA has an important role to play in ensuring the basic safety of products and the accuracy and completeness of labeling and advertising. On the other hand, they generally do not want the FDA to inhibit the transmission of truthful information from manufacturers to consumers, and – except in cases in which risk very clearly outweighs benefit – they prefer that the government allow consumers to make their own decisions regarding what to put in their own bodies.”

This is a masterpiece of self-contradiction. Just exactly what is an “important role to play,” anyway? Allowing an agency that previously denied the right to label and advertise to play any role is playing with fire; it means that genuine consumer advocates have to fight a constant battle with the government to hold onto the territory they have won. If consumers really don’t want the FDA to “inhibit the transmission of truthful information from manufacturers to consumers,” they should abolish the FDA, because free markets do the job consumers want done by definition and the laws already prohibit fraud and deception.

The real whopper in Grossman’s summary is the caveat about risk and benefit. Government agencies in general and the FDA in particular have traditionally shunned cost/benefit and risk/benefit analysis like the plague; when they have attempted it they have done it badly. Just exactly who is going to decide when risk “very clearly” outweighs benefit in a regulatory context, then? Grossman, a professional policy analyst who should know better, is treating the FDA exactly as the general public does. He is assuming that a government agency is a wish-fulfillment entity that will do exactly what he wants done – or, in this case, what he claims the public wants done – rather than what it actually does.

Every member of the general public would scornfully deny that he or she believes in a man called Santa Claus who lives at the North Pole and flies around the world on Christmas Eve distributing presents to children. But for an apparent majority of the public, government in general and regulation in particular plays a similar role because people ascribe quasi-magical powers to them to fulfill psychological needs. For these people, it might be more apropos to view government as “Mommy” or “Daddy” because of the strength and dependent nature of the relationship.

Can Government Control Consumer Risk? The Emerging Scientific Answer: No 

The comments of Grossman, assorted researchers and countless other commentators and onlookers over the years imply that government regulation is supposed to act as a sort of stern but benevolent parent, protecting us from our worst impulses by regulating the risks we take. This is reflected not only in cigarette taxes but also in the draconian warnings on cigarette packages and in numerous other measures taken by regulators. Mandatory seat belt laws, adopted by 49 state legislatures since the mid-1980s at the urging of the federal government, promised the near-elimination of automobile fatalities. Government bureaucracies like the Occupational Safety and Health Administration have covered the workplace with a raft of safety regulations. The Consumer Product Safety Commission presides with an eagle eye over the safety of the products that fill our market baskets.

In 1975, University of Chicago economist Sam Peltzman published a landmark study in the Journal of Political Economy. In it, Peltzman revealed that the various devices and measures mandated by government and introduced by the big auto companies in the 1960s had not actually produced statistically significant improvements in safety, as measured by auto fatalities and injuries. In particular, use of the new three-point seat belts seemed to show a slight improvement in driver fatalities that was more than offset by a rise in fatalities to others – pedestrians, cyclists and possibly occupants of victim vehicles. Over the years, subsequent research confirmed Peltzman’s results so repeatedly that former Council of Economic Advisers Chairman N. Gregory Mankiw dubbed this the “Peltzman Effect.”

A similar kind of result emerged throughout the social sciences. Innovations in safety continually failed to produce the kind of safety results that experts anticipated and predicted, often failing to provide any improved safety performance at all. It seems that people respond to improved safety by taking more risk, thwarting the expectations of the experts. Needless to say, this same logic applies also to rules passed by government to force people to behave more safely. People simply thwart the rules by finding ways to take risk outside the rules. When forced to wear seat belts, for example, they drive less carefully. Instead of endangering only themselves by going beltless, now they endanger others, too.

Today, this principle is well-established in scientific circles. It is called risk compensation. The idea that people strive to maintain, or “purchase,” a particular level of risk and hold it constant in the face of outside efforts to change it is called risk homeostasis.

These concepts make the entire project of government regulation of consumer risk absurd and counterproductive. Previously it was merely wrong in principle, an abuse of human freedom. Now it is also wrong in practice because it cannot possibly work.

Dropping the Façade: the Reality of Government Regulation

If the results of government regulation do not comport with its stated purposes, what are its actual purposes? Are the politicians, bureaucrats and employees who comprise the legislative and executive branches and the regulatory establishment really unconscious of the effects of regulation? No, for the most part the beneficiaries of regulation are all too cynically aware of the façade that covers it.

Politicians support regulation to court votes from the government-dependent segment of the voting public and to avoid being pilloried as killers and haters or – worst of all – a “tool of the big corporations.” Bureaucrats tacitly do the bidding of politicians in their role as administrators. In return, politicians do the bidding of bureaucrats by increasing their budgets and staffs. Employees vote for politicians who support regulation; in return, politicians vote to increase budgets. Employees follow the orders of bureaucrats; in return, bureaucrats hire bigger staffs that earn them bigger salaries.

This self-reinforcing and self-supporting network constitutes the metastatic cancer of big government. The purpose of regulation is not to benefit the public. It is to milk the public for the benefit of politicians, bureaucrats and government employees. Regulation drains resources away from and hamstrings the productive private economy.

Even now, as we speak, this process – aided, abetted and drastically accelerated by rapid money creation – is bringing down the economies of the Western world around our ears by simultaneously wreaking havoc on the monetary order with easy money, burdening the financial sector with debt and eviscerating the real economy with regulations that steadily erode its productive potential.

DRI-228 for week of 10-5-14: Can We Afford the Risk of EPA Regulation?

An Access Advertising EconBrief:

Can We Afford the Risk of EPA Regulation?

Try this exercise in free association. What is first brought to mind by the words “government regulation?” The Environmental Protection Agency would be the answer of a plurality, perhaps a majority, of Americans. Now envision the activity most characteristic of that agency. The testing of industrial chemicals for toxicity, with a view to determining safe levels of exposure for humans, would compete with such alternative duties as monitoring air quality and mitigating water pollution. Thus, we have a paradigmatic case of government regulation of business in the public interest – one we would expect to highlight regulation at its best.

One of the world’s most distinguished scientists recently reviewed EPA performance in this area. Richard Wilson, born in Great Britain but long resident at Harvard University, made his scientific reputation as a pioneer in the field of particle physics. In recent decades, he became perhaps the leading expert in nuclear safety and the accidents at Three Mile Island, Chernobyl and Fukushima, Japan. Wilson is a recognized leader in risk analysis, the study of risk and its mitigation. In a recent article in the journal Regulation (“The EPA and Risk Analysis,” Spring 2014), Wilson offers a sobering explanation of “how inadequate – and even mad and dangerous – the U.S. Environmental Protection Agency’s procedures for risk analysis are, and why and how they must be modified.”

Wilson is neither a political operative nor a laissez-faire economist. He is a pure scientist whose credentials gleam with ivory-tower polish. He is not complaining about excesses or aberrations, but rather characterizing the everyday policies of the EPA. Yet he has forsworn the dispassionate language of the academy for words such as “mad” and “dangerous.” Perhaps most alarming of all, Wilson despairs of finding anybody else willing to speak publicly on this subject.

The EPA and Risk 

The EPA began life in 1970 during the administration of President Richard Nixon. It was the culmination of the period of environmental activism begun with the publication of Rachel Carson’s book Silent Spring in 1962. The EPA’s foundational project was the strict scrutiny of industrial society for the risks it allegedly posed to life on Earth. To that end, the EPA proposed “risk assessment and regulations” for about 20 common industrial solvents.

How was the EPA to assess the risks of these chemicals to humans? Well, standard scientific procedure called for laboratory testing that isolates the chemical’s effects from the myriad other forces impinging on human health. There were formidable problems with this approach, though. For one thing, teasing out the full range of effects might take decades; epidemiological studies on human populations are commonly carried out over 10 years or more. Another problem is that human subjects would be exposed to considerable risk, particularly if dosages were amped up to shorten the study periods.

The EPA solved – or rather, addressed – the problem by using laboratory animals such as rats and mice as test subjects. Particularly in the beginning, few people objected when rodents received astronomically high dosages of industrial chemicals in order to determine the maximum level of exposure consistent with safety.

Of course, everybody knew that rodents were not comparable to people for research purposes. The EPA addressed that problem, too, by adjusting their test results in the simplest ways. They treated the results applicable to humans as scalar multiples of the rodent results, with the scale being determined by weight. They assumed that the chemicals were linear in their effects on people, rather than (say) having little or no effect up to a certain point or threshold. (A linear effect would be infinitesimally small with the first molecule of exposure and rise with each subsequent molecule of exposure.)
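To make the modeling distinction concrete, here is a minimal sketch contrasting the two dose-response assumptions just described. The slope and threshold values are arbitrary placeholders chosen for illustration, not EPA parameters.

```python
# Illustrative contrast of a linear (no-threshold) dose-response model with a threshold model.
# Slope and threshold values are arbitrary placeholders, not EPA figures.

def linear_response(dose, slope=1e-6):
    """Linear model: risk begins with the very first unit of exposure and rises proportionally."""
    return slope * dose

def threshold_response(dose, threshold=100.0, slope=1e-6):
    """Threshold model: no added risk until exposure exceeds the threshold."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

for dose in (1, 50, 100, 500):
    print(f"dose {dose:>3}: linear risk = {linear_response(dose):.2e}, "
          f"threshold risk = {threshold_response(dose):.2e}")
```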

Of all the decisions made by EPA, none was more questionable than the standard it set for allowable risk from exposure to toxic chemicals. The standard set by EPA was no more than one premature death per million of exposed population over a statistical lifetime. Moreover, the EPA also assumed the most unfavorable circumstances of exposure – that is, that those exposed would receive exposure daily and get the level of exposure that could only be obtained occupationally by workers routinely exposed to high levels of the substance. This maximum safe level of exposure was itself a variable, expressed as a range rather than a single point, because the EPA could not assume that all rats and mice were identical in their response to the chemicals. Here again, the EPA assumed the maximum degree of uncertainty in reaction when calculating allowable risk. As Wilson points out, if the EPA had assumed average uncertainty instead, this would have reduced their statistical risk to about one in ten million.
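The practical meaning of a “one in a million per statistical lifetime” standard can be seen with simple arithmetic; the exposed-population figure below is a hypothetical illustration, not an EPA number.

```python
# Expected premature deaths implied by a one-in-a-million-per-lifetime risk standard.
# The exposed population of 10 million is a hypothetical illustration.
risk_per_lifetime = 1e-6
exposed_population = 10_000_000

expected_deaths = risk_per_lifetime * exposed_population
print(expected_deaths)   # 10.0 expected premature deaths over a statistical lifetime
```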

It is difficult for the layperson to evaluate this “one out of a million” EPA standard. Wilson tries to put it in perspective. The EPA is saying that the a priori risk imposed by an industrial chemical should be roughly equivalent to that imposed by smoking two cigarettes in an average lifetime. Is that a zero risk? Well, not in the abstract sense, but it will do until something better comes along. Wilson suggests that the statistical chance of an asteroid hitting Earth is from 100 to 1000 times greater than this. There are several chemicals found in nature, including arsenic and mercury, whose risk of death to man is about 1,000 times greater than this EPA-stipulated risk. Still astronomically small, mind you – but vastly greater than the arbitrary standard set by the EPA for industrial chemicals.

Having painted this ghastly portrait of your federal government at work, Wilson steps back to allow us a view of the landscape that the EPA is working to alter. There are some 80,000 industrial chemicals in use in the U.S. Of these, about 20 have actually been studied for their effects on humans. Somewhere between 10,000 and 20,000 chemicals have been tested on lab animals using methods like those described above. That means that, very conservatively speaking, there are at least 60,000 chemicals for which we have only experience as a guide to their effects on humans.

What should we do about this vast uncharted chemical terrain? Well, we know what the EPA has done in the past. A few years ago, Wilson reminds us, the agency was faced with the problem of disposing of stocks of nerve gas, including sarin, one of the most deadly of all known substances. The agency conducted a small test incineration and then looked at the resulting combustion products. When it found only a few on its list of toxic chemicals, it ignored the various other unstudied chemicals among the byproducts and dubbed the risk of incineration to be zero! It was so confident of this verdict that it solicited the forensic testimony of Wilson on its behalf – in vain, naturally.

Wilson has now painted a picture of a government agency gripped by analytical psychosis. It arrogates to itself the power to dictate safety to us, imposes unreal standards of safety on the chemicals it studies – then arbitrarily assumes that unstudied chemicals are completely safe! Now we see where Wilson’s words “mad and dangerous” came from.

Economists who study government should be no more surprised by the EPA’s actions than by Wilson’s horrified reaction to them. The scientist reacts as if he were a child who has discovered for the first time that his parents are capable of the same human frailties as other humans. “Americans deserve better from their government. The EPA should have a sound, logical and scientific justification for its chemical exposure regulations. As part of that, agency officials need to accept that they are sometimes wrong in their policymaking and that they need to change defective assessments and regulations.” Clearly, Wilson expects government to behave like science – or rather, like science is ideally supposed to behave, since science itself does not live up to its own high standards of objectivity and honesty. Economists are not nearly that naïve.

The Riskless Society

Where did the EPA’s standard of no more than one premature death per million exposed per statistical lifetime come from? “Well, let’s face it,” the late Aaron Wildavsky quipped, “no real man tells his girlfriend that she is one in a hundred thousand.” Actually, Wildavsky observes, “the real root of ‘one in a million’ can be traced to the [government’s] efforts to find a number that was essentially equivalent to zero.” Lest the reader wonder whether Wilson and Wildavsky are peculiar in their insistence that this “zero-risk” standard is ridiculous, we have it on the authority of John D. Graham, former director of the Harvard School of Public Health’s Center for Risk Analysis, that “No one seriously suggested that such a stringent risk level should be applied to a[n already] maximally exposed individual.”

Time has also been unkind to the rest of EPA’s methodological assumptions. Linear cancer causation has given way to recognition of a threshold up to which exposure is harmless or even beneficial. This jibes with the findings of toxicology, in which the time-honored first principle is “the dose makes the poison.” It makes it next to impossible to gauge safe levels of exposure using either tests on lab animals or experience with low levels of human exposure. As Wildavsky notes, it also helps explain our actual experience over time, in which “health rates keep getting better and better while government estimates of risk keep getting worse and worse.”

During his lifetime, political scientist Aaron Wildavsky was the pioneering authority on government regulation of human risk. In his classic article “No Risk is the Highest Risk of All” (The American Scientist, 1979, 67(1), pp. 32-37) and his entry on the “Riskless Society” in the Fortune Encyclopedia of Economics (1993, pp. 426-432), Wildavsky produced the definitive reply to the regulatory mentality that now grips America in a vise.

Throughout mankind’s history, human advancement has been spearheaded by technological innovation. This advancement has been accompanied by risk. The field of safety employs tools of risk reduction. There are two basic strategies for risk reduction. The first is anticipation. The EPA, and the welfare state in general, tacitly assume this to be the only safety strategy. But Wildavsky notes that anticipation is a limited strategy because it only works when we can “know the quality of the adverse consequence expected, its probability and the existence of effective remedies.” As Wildavsky dryly notes, “the knowledge requirements and the organizational capacities required to make anticipation an effective strategy… are very large.”

Fortunately, there is a much more effective remedy close at hand. “A strategy of resilience, on the other hand, requires reliance on experience with adverse consequences once they occur in order to develop a capacity to learn from the harm and bounce back. Resilience, therefore, requires the accumulation of large amounts of generalizable resources, such as organizational capacity, knowledge, wealth, energy and communication, that can be used to craft solutions to problems that the people involved did not know would occur.” Does this sound like a stringent standard to meet? Actually, it shouldn’t. We already have all those things in the form of markets, the very things that produce and deliver our daily bread. Markets meet and solve problems, anticipated and otherwise, on a daily basis.

Really, this is an old problem in a new guise. It is the debate between central planning – which assumes that the central planners already know everything necessary to plan our lives for us – and free competition – which posits that only markets can generate the information necessary to make social cooperation a reality. Wildavsky has put the issue in political and scientific terms rather than the economic terms that formed the basis of the Socialist Calculation debates of the 1920s and 30s between socialists Oskar Lange and Fred Taylor and Austrian economists Ludwig von Mises and F. A. Hayek. The EPA is a hopelessly outmoded relic of central planning that not only fails to achieve its objectives, but threatens our freedom in the bargain.

In “No Risk is the Highest Risk of All,” Wildavsky uses the economic concept of opportunity cost to make the decisive point that by utilizing resources inefficiently to drive one particular risk all the way to zero, government regulators are indirectly increasing other risks. Because this tradeoff is not made through the free market but instead by government fiat, we have no reason to think that people are willing to bear these higher alternative risks in order to gain the infinitesimally small additional benefits of driving the original risk all the way to zero. As a purely practical matter, we can be sure that this tradeoff is wildly unfavorable. The EPA bans an industrial chemical because it does not meet the agency’s impossibly high safety standard. Businesses across the nation have to utilize an inferior substitute. This leaves the businesses, their shareholders, employees and consumers poorer, with less real income to spend on other things. Safety is a normal good, something people and businesses spend more on when their real incomes rise and less on when real incomes fall. The EPA’s foolish “zero-risk” regulatory standard has created a ripple effect that reduces safety throughout the economy.

The Proof of the Safety is in the Living

Wildavsky categorically cited the “wealth to health” linkage as a “rule without exception.” To get a concrete sense of this transformation in the 20th century, we can consult the U.S. historical life expectancy and mortality tables. Between 1890 and 1987, life expectancy for white males rose from 42.5 years to 72.2 years; for non-white males, from 32.54 years to 67.3 years. For white females, it rose from 44.46 years to 78.9 years; for non-white females, from 35.04 years to 75.2 years. (Note, as did Wildavsky, that the longevity edge enjoyed by females over males came to exceed that enjoyed by white males over non-whites.)

Various diseases were fearsome killers at the dawn of the 20th century but petered out over its course. Typhoid fever killed an average of 26.7 people per 100,000 as the century turned (1900-04); by 1980 it had been virtually wiped out. The communicable diseases of childhood (measles, scarlet fever, whooping cough and diphtheria) carried away 65.2 out of every 100,000 people in the early days of the century but, again, by 1980 they had been virtually wiped out. Pneumonia used to be called “the old man’s friend” because it was the official cause of so many elderly deaths, which is why it accounted for 161.5 deaths per 100,000 people during 1900-04. But that number had plummeted to 22.0 by 1980. Influenza caused 22.8 deaths per 100,000 during 1900-04, but the disease was nearing extinction in 1980, with only 1.1 deaths per 100,000 ascribed to it. Tuberculosis was another major killer, racking up 184.7 deaths per 100,000 on average in the early 1900s. By 1980, the disease was on the ropes with a death rate of only 0.8 per 100,000. Thanks to modern medicine, appendicitis went from lethal to merely painful, with a death rate of merely 0.3 per 100,000 people. Syphilis went from scourge of sexually transmitted diseases to endangered species of same, going from 12.9 deaths per 100,000 to 0.1.

Of the major causes of death, only cancer and cardiovascular disease showed significant increases. Cancer is primarily a disease of age; the tremendous jump in life expectancy meant that many people who formerly died of the causes listed above now lived to reach old age, where they succumbed to cancer. That is why the incidence of most diseases fell while cancer deaths increased. “Heart failure” is a default listing for cause of death when underlying conditions are severe enough to cause the organ to fail but no single acute cause kills directly. That accounts for much of the increase in cardiovascular deaths, although differences in lifestyle associated with greater wealth also bear part of the blame for the failure of cardiovascular deaths to decline despite great advances in medical knowledge and technology. (In recent years, this tendency has begun to reverse.)

The activity-linked mortality tables are also instructive. The tables are again expressed as a rate of fatality per 100,000 people at risk, which can be translated into absolute numbers with the application of additional information. By far the riskiest activity is motorcycling, with an annual death rate of 2,000 per 100,000 participants. Smoking lags far behind at 300, with only 120 of these ascribable to lung cancer. Coal mining is the riskiest occupation, with 63 deaths per 100,000 participants, although it has to share that title with farming. It is, on average, riskier to drive a motor vehicle (24 deaths per 100,000) than to be a uniformed policeman (22 deaths). Roughly 60 people per year are fatally struck by lightning. The lowest risk actually calculated by statisticians is the 0.000006 per 100,000 (roughly six in 100 billion) risk of dying from a meteorite strike.
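To see how such rates relate to the absolute numbers just quoted, consider a small illustrative calculation. The sketch below (Python) uses the 60 annual lightning deaths cited above; the 230 million population figure is a rough assumption about the U.S. at the time, included only to show the conversion.

# Converting an absolute death toll into a rate per 100,000 is simple arithmetic;
# the population figure below is an assumption used only for illustration.
lightning_deaths_per_year = 60          # absolute figure cited above
population = 230_000_000                # assumed rough U.S. population at the time

rate_per_100k = lightning_deaths_per_year / population * 100_000
print(f"Lightning: about {rate_per_100k:.3f} deaths per 100,000 per year")
# Prints roughly 0.026 -- far below driving (24) but still vastly above the
# 0.000006 meteorite figure.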

It is clear that risk is not something to be avoided at all cost but rather an activity that provides benefits at a cost. Driving, coal mining and policing carry the risk of death but also provide broad-based benefits not only to practitioners but to consumers and producers. Industrial chemicals also provide widespread benefits to the human race. It makes no sense to artificially mandate a “one in a million” death-risk for industrial solvents when just climbing in the driver’s seat of a car subjects each of us to a risk that is hundreds of thousands of times greater than that. We don’t need all-powerful government pretending to regulate away the risk associated with human activities while actually creating new hidden risks. We need free markets to properly price the benefits and costs associated with risk to allow us to both efficiently run risks and avoid them.

This fundamental historical record has been replicated with minor variations across the Western industrial landscape. It was not achieved by heavy-duty government regulation of business but by economic growth and markets – growth that began to slow as the welfare state and regulation came to predominate. Ironically, recent slippage in health and safety has been associated with the transfer of public distrust from government – where it is well-founded – to science. Falling vaccination rates have produced a revival of diseases, such as measles and diphtheria, that had previously been nearly extinct.

The Jaundiced Eye of Economists

If there is any significant difference in point of view between scientists (Wilson) and political scientists (Wildavsky) on the one hand, and economists on the other, it is the willingness to take the good faith of government for granted. Wilson apparently believes that government regulators can be made to see the error of their ways. Wildavsky apparently viewed government regulators as belonging to a different school of academic thought (“anticipation vs. resilience”) – maybe they would see the light when exposed to superior reasoning.

Economists are more practical or, if you like, more cynical. It is no coincidence that government regulatory agencies do not practice good science even when tasked to do so. They are run by political appointees and funded by politicians; their appointees are government employees who are paid by political appropriations. The power they possess will inevitably be wielded for political purposes. Most legal cases are settled because they are too expensive to litigate and because one or both parties fear the result of a trial. Government regulatory agencies use their power to bully the private sector into acquiescence with the political results favored by politicians in power. Private citizens fall in line because they lack the resources to fight back and because they fear the result of an administrative due process in which the rules are designed to favor government. This is the EPA as it is known to American businesses in their everyday world, not as it exists in the conceptual realities of pure natural science or academic political science.

The preceding paragraph describes a kind of bureaucratic totalitarianism that differs from classical despotism. The despot or dictator is a unitary ruler, while the bureaucracy wields a diffused form of absolute power. Nevertheless, this is the worst outcome associated with the EPA and top-down federal-government regulation in general. The risks of daily life are manageable compared to the risks of bad science dictated by government. And both these species of risk pale next to the risk of losing our freedom of action, the very freedom that allows us to manage the risks that government regulation does not and cannot begin to evaluate or lessen.

The EPA is just too risky to have around.

DRI-270 for week of 10-6-1: How is Job Safety Produced?

An Access Advertising EconBrief:

How is Job Safety Produced?

The best-selling book on economics in the 20th century was probably Free to Choose, the 1980 defense of free markets by Milton and Rose Friedman. It contained a chapter entitled, “Who Protects the Worker?” In it, the authors highlighted the tremendous improvement in the working conditions and living standards of workers from the Industrial Revolution onward. What, they inquired rhetorically, accounted for this? The Friedmans suggested “labor unions” and “government” as the likely top two answers to any poll taken on this subject.

One of the nation’s leading experts on the subject of risk and safety is W. Kip Viscusi, long an economics professor at Harvard, Duke and Vanderbilt universities and now affiliated with the Independent Institute. In an essay on “Job Safety” for the Fortune Encyclopedia of Economics, Viscusi wrote: “Many people believe that employers do not care whether their workplace conditions are safe. If the government were not regulating job safety, they contend, workplaces would be unsafe.”

The Friedmans and Viscusi knew something that the general public doesn’t know about job safety; namely, that free markets and competition are what keep workers safe. The notion of a “market” for risk or safety seems hopelessly abstruse to most people. The general attitude toward competition can best be described as ambivalent. Still, it is the job of economists to make the complex understandable. Herewith an explanation of how job safety is really produced.

Compensating Differentials

Most people seem comfortable with the fact that wage differentials exist between jobs. Moreover, the direction of difference is not random. Different types of manual labor may differ radically in the element of physical risk to which workers are subjected – coal mining, for example, presents a much higher probability of death or severe injury than does loading-dock work. The greater the risk associated with an employment, the higher the wage its workers will command.

In free markets, wages result from the interaction of supply and demand. Does the “risk differential” reflect variations in the supply of labor or the demand for it? Either or both. The toll of current and future mortality in coal mining – from accidents and “black-lung” disease, respectively – tends to restrict the supply of labor to the profession, driving up miners’ wages on that account alone. Coal’s high-BTU content makes a miner’s output from a 40-hour workweek much more valuable than that of the dock worker, so the demand for coal miners exceeds that for dock workers – a factor also tending to drive miners’ wages above those of dock workers.

This logic extends to other characteristics of employment, beyond those ordinarily associated with risk. Library work is viewed as pleasant because of its low-key, low-stress, peaceful character and attractive environment. This attracts a plentiful supply of applicants for low-rung library jobs (pages and assistants) and the continuous pursuit of graduate degrees in library science (required for librarians). This bountiful supply of labor tends to depress wages within libraries below those of comparable other jobs, such as clerks, cashiers, tellers and such. The particular attractions of library work influence people to accept lower wages than would otherwise be acceptable – in effect, library employees receive part of their payment in kind rather than in cash.

Economists use the term compensating differentials as shorthand to denote and explain differences in work-related remuneration that compensate for differences in how different jobs are perceived or experienced. The phenomenon was first observed and categorized by the great Adam Smith in 1776 in his magnum opus, An Inquiry into the Nature and Causes of the Wealth of Nations. Smith observed that positive wage differences would exist for occupations that were dirty or unsafe, such as coal mining or butchering, those that were odious, such as performing executions, and those that were difficult to learn.

Compensating differentials play a key role in job safety. Opponents of markets – who tend to be the same people promoting government regulation of job safety – insist that employers are too parsimonious to spend money on job safety. And why should they? From the employer’s standpoint, the anti-market man maintains, expenditures on job safety are a waste because they are a cost that does not contribute to the employer’s revenue. The compensating differentials argument supplies a potential motivation for the employer’s investment in safety. A safer workplace will increase the worker’s willingness to accept lower wages, thus allowing the employer to recoup his investment over time, just as if the investment allowed him to earn more revenue.

This positive incentive may have a negative counterpart as well. The ability of workers to sue for tort injury provides an incentive to improve worker safety. (In this regard, Viscusi makes a vital distinction: firms must correctly understand and anticipate liability in order to feel this incentive. The most famous example is the massive asbestos litigation, in which longtime principles of tort liability were overturned in order to find large companies like Johns Manville liable for worker illnesses contracted many years before the link between asbestos and mesothelioma was uncovered.)

The Market for Job Safety

The most common way of assessing risk is to calculate the approximate rate of death or injury per annum. For example, a job requiring physical labor entailing moderate risk might result in one death per 10,000 workers per year. Workers in this employment should expect to earn a modest premium – perhaps $500 – $700 per year – over workers doing labor involving essentially no risk of death. Another way to view this premium would be to call it the amount that workers would willingly give up to avoid the risk they bear.  And this amount also sets a ceiling on what employers would pay to improve jobsite safety, since any amount below this will save the employer money, while any amount above it will cost more in safety expenditures than the amount the employer could save in avoided wage premia.
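The arithmetic behind this tradeoff can be laid out explicitly. Here is a minimal sketch (Python) using the risk level and premium range from the paragraph above; the final “implied value per statistical death avoided” line is a standard extension of those same numbers, not a figure reported in the text.

# Wage-premium arithmetic for a job carrying one expected death per 10,000 workers per year.
annual_death_risk = 1 / 10_000            # one death per 10,000 workers per year
premium_low, premium_high = 500, 700      # annual wage premium per worker, in dollars

# The premium is also the ceiling on what the employer could profitably spend
# per worker per year to eliminate the risk entirely.
print(f"Per-worker safety-spending ceiling: ${premium_low}-${premium_high} per year")

# Spread over the 10,000 workers who collectively bear one expected death,
# the premium implies a collective valuation per statistical death avoided.
for premium in (premium_low, premium_high):
    print(f"Implied value per statistical death avoided: ${premium / annual_death_risk:,.0f}")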

The market for safety is one in which workers assess the risk characteristics of jobs they contemplate. Their assessment determines their willingness to work at that job and the wage at which they will work. It is obvious that the successful functioning of this market demands that workers correctly assess a job’s risk/safety profile.

“How well does the safety market work?” Viscusi asks rhetorically. “For it to work well, workers must have some knowledge of the risks they face. And they do.” [emphasis added] He cites one study of 496 workers showing that workers’ perceptions of danger rose along with the actual risk of injury in their industries. Only 24% of workers in women’s outerwear manufacturing and communications equipment characterized their industry as “dangerous.” But 100% of workers in logging and meat products described their industry as dangerous – correctly, as it turned out.

Are workers ever wrong about the risks they face? Well, they sometimes mis-estimate the level of risk they face, not by assuming it to be zero but by wrongly assuming it to be higher or lower than it is. But the evidence strongly suggests that the market does work.

Another datum supporting this conclusion is the general reduction in job risk throughout the 20th century. As real income rose throughout the century, we would expect workers to take some of their gains in the form of risk reduction; that is, they would deliberately seek out less job risk because the increase in real wages allows them this luxury. In effect, this implies that safety (or risk reduction) is a normal good, something workers choose to “purchase” more of when their real incomes rise. In fact, that is exactly what happened over time. Real wages roughly tripled from 1933 to 1970, and average death rates on the job fell from about 37 per 100,000 workers to about 18.

Still another aspect of the market for safety is the incentive it provides to learn. This applies to both employee and employer. Do workers keep track of new information about job-related safety hazards? Yes; the evidence is the high quit rate (about 33%) among workers who learn that job risk has risen since their initial hire. Since the hiring and training process is expensive for employers, this represents an incentive for them to hold down those risks.

Government Regulation as a Way to Improve Job Safety

To Americans under forty years of age, it must seem as though the federal government has always been omnipresent in economic life. Actually, the bulk of federal-government regulation is the legacy of two historical periods – the New Deal administration of President Franklin Roosevelt from 1933-1945 and the Great Society regime of President Lyndon Johnson from 1963-1968. Most of the non-financial regulatory apparatus, including the agencies dealing with health and safety, was created in the late 60s and early 70s. The publicity created by the muckraking exposés of consumer activist Ralph Nader played a key role in stimulating the implementing legislation for these agencies. (Viscusi’s career began with the two years he spent as an apprentice in the Nader organization prior to his academic training.)

In 1970, the federal Occupational Safety and Health Act created the agency called OSHA (the Occupational Safety and Health Administration). The agency is an attempt to engineer a theoretically safe workplace and implement it by government fiat. That de-glamorized description of its mission highlights the agency’s glaring flaw: the substitution of technological criteria for economic ones in solving what are fundamentally economic problems. OSHA’s attempt to ban formaldehyde from the workplace in 1987 resulted in rulemakings that were estimated to cost $72 billion for each life they purported to save. To add insult to this grievous injury to economic logic, the U.S. Supreme Court ruled that OSHA regulations could not be subjected to any cost-benefit test, thus enshrining the agency’s right to commit acts of fiscal and economic lunacy with apparent impunity.

It seems difficult to believe that the judges could not envision the possibility that the $72 billion committed to saving that life had alternative uses that included saving multiple other lives. Yet the idea that the Constitution should codify any kind of respect for economic logic remains outside the legal mainstream to this day, despite the efforts of scholarly judges like Richard Posner and Frank Easterbrook to bring their substantial economic learning to bear.

Viscusi notes that “increases in safety from OSHA’s activities have fallen short of expectations. According to some economists’ estimates, OSHA’s regulations have reduced workplace injuries by at most 2 to 4%.” He compares the fines OSHA collects in the average year (about $10 million at the time Viscusi wrote) to the size of the aggregate risk premium embedded in U.S. wages (about $120 billion at that point). Obviously, the market for safety was disciplining employers and employees alike much more powerfully than OSHA.

As the Friedmans pointed out, though, “government does protect one class of workers very well; namely, those employed by government.” Government employees have job security and incomes linked to the cost of living. Their civil-service retirement pensions are also indexed to inflation and superior to anything available from the Social Security system most Americans are tied to by law. Those government employees who retire early enough to log enough quarters of private-sector employment to qualify for Social Security benefits can “double-dip” from the government pension trough. Needless to say, this is not exactly the concept that OSHA, et al, were designed to further.

Labor Union Bargaining

The role of labor unions in securing improvements in job safety is limited to whatever provisions the union might succeed in embedding into negotiated labor contracts. Unions cannot add to the market incentives to improve safety – incentives that would exist whether unions existed or not. Indeed, if anything, the opposite is true.

Unions can succeed in raising the wage received by their members. They do this either by restricting the supply of labor by limiting the legal supply of workers to union members, or by legally bargaining for a wage higher than the one that would otherwise prevail in a free market. Either way, the result of this above-market wage is unemployment of labor. To the extent that workers leave the unionized industry for employment elsewhere, the higher unionized wages are counterbalanced by lower wages elsewhere.

The wage premium for risk will represent a lower fraction or percentage of the higher, unionized wage than of the market-level wage. Thus, labor unions dilute or lessen the impact of wage premia in creating job safety for workers.
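A small numerical illustration may help. The wages below are purely hypothetical – none of them appear above – and serve only to show why the same dollar premium looms smaller against a bargained, above-market wage (a minimal Python sketch).

# Hypothetical numbers: a $2.00 hourly risk premium attached to a $20.00 market wage
# versus the same premium attached to a $25.00 union wage.
risk_premium = 2.00
market_wage = 20.00
union_wage = 25.00

for label, wage in (("market wage", market_wage), ("union wage", union_wage)):
    print(f"{label}: risk premium is {risk_premium / wage:.0%} of total pay")
# Prints 10% for the market wage and 8% for the union wage -- the safety signal
# sent by the premium is proportionally weaker at the union wage.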

Risk Compensation

The most powerful development in the economics of risk and safety over the last four decades has been the recognition of risk compensation behavior as an offset to rulemaking by government. In the late 1960s, University of Chicago economist Sam Peltzman began to investigate the federal automotive safety laws designed to force automobile companies to add safety equipment to cars.

The laws made no sense to him. He could see that car companies had incentives to add safety improvements to cars, provided customers wanted them – and he didn’t doubt that many consumers did. But he didn’t see why the companies had to be, or should be, forced to do something that might well be in their own interest anyway or, alternatively, might not make sense at all.

Peltzman’s research, summarized in a now-classic 1975 article in the Journal of Political Economy, found that the safety regulations did not improve safety on net balance. That is, they either failed to improve actual safety or the lives saved or injuries avoided were offset by other lives lost and injuries incurred because of the laws and safety measures taken.

The key overall principle at work was risk compensation. Safety measures like air bags, seat belts and anti-lock brakes made people feel safer. Consequently, the most risk-loving individuals drove faster and incurred more driving risk to offset the death-and-injury risk that had been reduced by the new safety measures and equipment.

Peltzman’s results were initially greeted with massive skepticism. But forty years of research have vindicated them resoundingly. The “Peltzman Effect” is now recognized worldwide by social and physical scientists. It has been verified empirically in research involving motorcycle and bicycle accidents as well as automobile crashes, and in such diverse fields as athletics, children’s play, recreational pursuits like skydiving and fields like insurance and finance.

Really, risk compensation is not nearly as counterintuitive as it seems upon first exposure. The logic of command-and-control government rules is that most people are mindless robots who are incapable of perceiving incentives, let alone acting in their own interest – but who are capable of following rules laid down by government.  Alternatively, they are docile enough to pay fines ad infinitum after racking up violations. The glaring exceptions are government rule makers, who are well-informed and well-intentioned enough to make the rules that the robots are supposed to follow.

Nothing about actual human behavior suggests that human beings conform to this model. The evidence clearly reveals human beings as rational, subject to the informational constraints under which they all labor. The idea that we react to the presence of rules that run counter to our predilections is fully consistent with this picture. It is perfectly clear why OSHA’s rules have “fallen short of expectations” – because OSHA failed to realize that when it forces people to obey rules against their will, it takes away happiness that they will strive to regain. That is true by definition; that is what “against their will” means.

The Common Sense of the Free-Market Approach to Job Safety

In free markets, workers demand a “wage premium” to reflect the degree of danger or “unsafety” they perceive in a job. They don’t “demand” it by walking into an employer’s office and banging on the desk – they don’t have to. They just work only when and where wages rise sufficiently to compensate them for the risk they bear. This voluntary approach allows the amount of work supplied to equal the amount employers seek at the equilibrium market wage. This contrasts with the approach of labor unions, which creates involuntary unemployment by insisting on a bargained, above-market wage and/or working conditions that employers would not voluntarily provide.

The common sense of the wage premium can be expressed in figurative terms: “In our (workers’) opinion, this wage premium reflects the degree of danger – above the norm or average – that we associate with this job. You (the employer) are free to make any safety modifications in the job or working environment that will cost less than this amount (in the aggregate), but beware of spending more than this. Meanwhile, we have freely chosen to accept the currently-existing risks – as we perceive them.”

This provides a framework for efficient improvements in job safety. Without it, we are left with vague, grandiose rhetoric about how “nothing less than absolute safety is tolerable for our workers” or “how would you like your son or daughter to work in such an environment?” That kind of nebulous talk is complete rubbish. Every human being willingly takes risks every day, consciously or not. One of the most important parts of growing up is learning the risks of everyday life – which ones are necessary, which ones are reasonable and which ones are foolish. It makes no sense whatever to whine that people “shouldn’t have to risk their lives mining coal” so “big corporations can make profits.” Does it make sense to say that people can willingly risk their lives climbing mountains, fighting bulls, racing automobiles, jumping out of planes, fighting fires and so on – but they can’t risk their lives to keep people warm in the winter? Free markets allow the individuals directly concerned – workers, employers and consumers – to gauge the risks and calculate which improvements in safety are worth making and which aren’t. They recruit the people most willing and able to bear risk by offering them a premium for their efforts. They warn the timid by differentiating jobs according to risk – if all jobs paid the same, a tedious and dangerous process of trial and error would be required to learn which jobs to avoid.

Contrast this reasoned, rational approach with that of government regulatory agencies. They substitute their own technological, engineering view of safety for the free-market approach and impose it on the public in the form of command-and-control, one-size-fits-all, take-it-or-leave-the-country rules and regulations. One might reply that engineers have a more informed view of safety than do workers and employers. Yet research by leading experts like W. Kip Viscusi shows that market wage premia closely track technological and ex post statistical measures of risk. And the government approach runs the risk of being skewed by politics; a regulatory agency’s objective studies may be overridden by a determination to please its bosses in the administration or the Congressional patrons upon whom its funding depends.

The final word belongs to formal logic, which declares that there is no such thing as a pure engineering optimum in resource allocation. An engineer can determine (say) the optimum output from given inputs into a particular machine, but he or she can never determine the value to place on the inputs or output. Only producers and consumers can do that; that is why we need markets to solve economic problems. The formaldehyde case, noted above, shows the ghastly extremes to which engineers and bureaucracy can go when given free rein.

How is Job Safety Produced?

Our investigation reveals that job safety is produced primarily by free markets through wage premia and voluntary improvements in safety enacted by employers. It is produced much less efficiently, less productively and more haphazardly by government and even less proficiently by the actions of labor unions.

DRI-280 for week of 7-7-13: Unintended Consequences and Distortions of Government Action

An Access Advertising EconBrief:

Unintended Consequences and Distortions of Government Action

The most important cultural evolution of 20th-century America was the emergence of government as the problem-solver of first resort. One of the most oft-uttered phrases of broadcast news reports was “this market is not subject to government regulation” – as if this automatically bred misfortune. The identification of a problem called for a government program tailored to its solution. Our sensitivity, compassion and nobility were measured by the dollar expenditure allocated to these problems, rather than by their actual solution.

This trend has increasingly frustrated economists, who associate government action with unintended consequences and distortions of markets. Since voluntary exchange in markets is mutually beneficial, distortions of the market and consequences other than mutual benefit are bad things. Economists have had a hard time getting their arguments across to the public.

One reason for this failure is the public unwillingness to associate a cause with an effect other than that intended. We live our lives striving to achieve our ends. When we fail, we don’t just shrug and forget it – we demand to know why. Government seems like a tool made to order for our purposes; it wields the power and command over resources that we lack as individuals. Our education has taught us that democracy gives us the right and even the duty to order government around. So why can’t we get it to work the way we want it to?

The short answer to that is that we know what we want but we don’t know how government or markets work, so we don’t know how to get what we want. In order to appreciate this, we need to understand the nature of government’s failures and of the market’s successes. To that end, here are various examples of unintended consequences and distortions.

Excise Taxation

One of the simplest cases of unintended, distortive consequences is excise taxation. An excise tax is a tax on a good, either on its production or its consumption. Although few people realize it, the meaningful economic effects of the tax are the same regardless of whether the tax is collected from the buyer of the good or from the seller. In practice, excise taxes are usually collected from sellers.

Consider a real-world example with purely hypothetical numbers used for expository purposes. Automotive gasoline is subject to excise taxation levied at the pump; i.e., collected from sellers but explicitly incorporated into the price consumers pay. Assume that the price of gas net of tax is $2.00 per gallon and that the combination of local, state and federal excise taxes adds up to $1.00 per gallon. That means the consumer pays $3.00 per gallon but the retail gasoline seller pockets only $2.00 per gallon.

Consider, for computational ease, a price decrease of $.30 per gallon. How likely is the gasoline seller to take this action? Well, he would be more likely to take it if his total revenue were larger after the price decrease than before. But with the excise tax in force, a big roadblock exists to price reductions by the seller. The $.30 price decrease subtracts 15% from the price (the net revenue per unit) the seller receives, but only 10% from the price per unit that the buyer pays. And it is the reduction in price per unit paid by the buyer that will induce purchase of more units, which is the only reason the seller would have to want to reduce price in the first place. The fact that net revenue per unit falls by a larger percentage than price per unit paid by consumers is a big disincentive to lowering price.

Consider the kind of case that is most favorable to price reductions, in which demand is price-elastic. That is, the percentage increase in consumer purchases exceeds the percentage decrease in price (net revenue). Assume that purchases were originally 10,000 gallons per week and increased to 11,200 (an increase of 12%, which exceeds the percentage decrease in price). The original total revenue was 10,000 x $2.00 = $20,000. Now total revenue is 11,200 x $1.70 = $19,040, nearly $1,000 less. Since the total costs of producing 1,200 more units of output are greater than before, the gasoline seller will not want to lower price if he correctly anticipates this result. Despite the fact that consumer demand responds favorably (in a price-elastic manner) to the price decrease, the seller won’t initiate it.

Without the excise taxation, consumers and seller would face the same price. If demand were price-elastic, the seller would expect to increase total revenue by lowering price and selling more units than before. If the increase in total revenue were more than enough to cover the additional costs of producing the added output, the seller would lower price.

Excise taxation can reduce the incentive for sellers to lower price when it is imposed in specific form – a fixed amount per unit of output. When the excise tax is levied ad valorem, as a percentage of value rather than a fixed amount per unit, that disincentive is no longer present. In fact, the specific tax is the more popular form of excise taxation.
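The whole argument can be checked with a few lines of arithmetic. The sketch below (Python) uses the numbers from the example above – a $2.00 net price, a $1.00-per-gallon specific tax, a $0.30 cut in the pump price and a 12% quantity response. The 50%-of-net ad valorem rate is an assumption chosen so that the tax also starts out at $1.00 per gallon, and the same 12% quantity response is assumed in all three cases purely for comparability.

# Seller's weekly revenue net of tax, before and after a price cut, under three regimes.
q_before, q_after = 10_000, 11_200        # gallons per week; a 12% increase after the cut

def net_revenue(net_price_before, net_price_after):
    return round(q_before * net_price_before, 2), round(q_after * net_price_after, 2)

# 1. No tax: the pump price is the net price; a 10% cut applies to both.
print(net_revenue(2.00, 2.00 * 0.90))       # (20000.0, 20160.0) -- revenue rises

# 2. Specific tax of $1.00/gallon: the pump price falls from $3.00 to $2.70 (10%),
#    but the seller's net price falls from $2.00 to $1.70 (15%).
print(net_revenue(2.00, 1.70))              # (20000.0, 19040.0) -- revenue falls

# 3. Ad valorem tax of 50% of the net price: net = pump price / 1.5, so a 10% cut
#    in the pump price is also a 10% cut in the seller's net price.
print(net_revenue(3.00 / 1.5, 2.70 / 1.5))  # (20000.0, 20160.0) -- revenue rises again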

The irony of this unintended consequence is felt most keenly in times of rising gasoline prices. Demagogues hold sway with talk about price conspiracies and monopoly power exerted by “big corporations” and oil companies. Talk-show callers expound at length on the disparity between price increases and price decreases and the relative reluctance of sellers to lower price. Yet the straightforward logic of excise taxation is never broached. The callers are right, but for entirely the wrong reason. The culprit is not monopoly or conspiracy. It is excise taxation.

This unintended consequence was apparently first noticed by Richard Caves of Harvard University in his 1964 text American Industry: Structure, Conduct, Performance.

ObamaCare: The 29’ers and 49’ers

The recent decision to delay implementation of the Affordable Care Act – more familiarly known as ObamaCare – has interrupted two of the most profound and remarkable unintended consequences in American legislative history. The centerpiece of ObamaCare is its health mandates: the requirement that individuals who lack health insurance acquire it or pay a sizable fine and the requirement that businesses of significant size provide health plans for their employees or, once again, pay fines.

It is the business mandate, scheduled for implementation in 2014, which was delayed in a recent online announcement by the Obama administration. The provisions of the law had already produced dramatic effects on employment in American business. It seems likely that these effects, along with the logistical difficulties in implementing the plan, were behind the decision to delay the law’s application to businesses.

The law requires businesses with 50 or more “full-time equivalent” employees to make a health-care plan available to employees. A “full-time-equivalent” employee is any combination of workers whose hours add up to those of one full-time employee. Full-time employment is defined as 30 hours per week, in contradiction to the longtime definition of 40 hours. Presumably this change was made in order to broaden the scope of the law, but it is clearly having the opposite effect – a locus classicus of unintended consequences at work.

Because the “measurement period” during which each firm’s number of full-time equivalent number of employees is calculated began in January 2013, firms reacted to the provisions of ObamaCare at the start of this year, even though the business mandate itself was not scheduled to begin until 2014. No sooner did the New Year unfold than observers noticed changes in fast-food industry employment. The changes took two basic forms.

First, firms – that is, individual fast-food franchises – capped their number of full-time employees at no more than 49. Thus, they became known as “49’ers.” This practice was obviously intended to stop the firm short of the 50-employee threshold for application of the health-insurance requirement under ObamaCare. At first thought, this may seem trivial if highly arbitrary. Further thought alters that snap judgment. Even more than food, fast-food firms sell service. This service is highly labor-intensive. An arbitrary limitation on full-time employment is a serious matter, since it means that any slack must be taken up by part-timers.

And that is part two of the one-two punch delivered to employment by ObamaCare. Those same fast-food firms – McDonald’s, Burger King, Wendy’s, et al – began limiting their part-time work force to 20 hours per week, thereby holding those workers below the 30-hour threshold as well. But, since many of those employees had previously been working 30 hours or more, the firms began sharing employees – encouraging their employees to work 20-hour shifts for rival firms and logging shift workers from those firms on their own books. Of course, two 20-hour shifts still add up to (more than) a full-time-equivalent worker, but as long as total worker hours do not exceed the 1,500-hour weekly total of 50 workers at 30 hours, the firm will still escape the health-insurance requirement. Thus were born the “29’ers” – firms that hold part-time workers below the 30-hour threshold for full-time-equivalent employment.

Are the requirements of ObamaCare really that onerous? Politicians and left-wing commentators commonly act as if health insurance were the least that any self-respecting employer could provide any employee, on a par with providing a roof to keep out the rain and heat to ward off freezing cold in winter. Fast-food entrepreneurs are striving to avoid the penalties associated with hiring that 50th full-time-equivalent employee. The penalty for failing to provide health insurance is $2,000 per full-time employee beyond the first 30. That is, the hiring of the 50th employee means incurring a penalty on the preceding 20 employees, a total penalty of $40,000. Hiring (say) 60 employees would raise the penalty to $60,000.
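For concreteness, the threshold-and-penalty arithmetic described above can be written out directly. The sketch below (Python) is a simplified rendering of the provisions as described in this section – the statute's actual full-time-equivalent calculation is more involved – with the 30-hour week, the 50-FTE threshold and the $2,000 penalty per employee beyond the first 30 taken from the text.

FULL_TIME_HOURS = 30           # ObamaCare's weekly definition of full-time work
FTE_THRESHOLD = 50             # the mandate bites at 50 full-time-equivalent employees
PENALTY_PER_EMPLOYEE = 2_000   # annual penalty per full-time employee beyond the first 30

def fte_count(full_timers, part_time_weekly_hours):
    # Simplified: part-time hours are converted to full-time equivalents at 30 hours/week.
    return full_timers + part_time_weekly_hours / FULL_TIME_HOURS

def annual_penalty(full_time_employees):
    # No penalty below the threshold; above it, $2,000 for each employee beyond the first 30.
    if full_time_employees < FTE_THRESHOLD:
        return 0
    return PENALTY_PER_EMPLOYEE * (full_time_employees - 30)

print(annual_penalty(49))      # 0      -- the "49'er" strategy
print(annual_penalty(50))      # 40000  -- the 50th hire triggers a $40,000 penalty
print(annual_penalty(60))      # 60000

# The "29'er" strategy: 49 full-timers plus one part-timer held to 29 hours keeps the
# firm's total under the 1,500-hour weekly equivalent of 50 full-time workers.
print(fte_count(49, 29))       # just under 50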

A 2011 study by the Hudson Institute found that the average fast-food franchise makes a profit of $50,000-100,000 per year. Thus, ObamaCare penalties could eat up most or all of a year’s profit. The study’s authors foresaw an annual cost to the industry of $6.4 billion from implementation of ObamaCare. 3.2 million jobs were estimated to be “at risk.” All this comes at a time when employment is painfully slow to recover from the Great Recession of 2007-2009 and the exodus of workers from the labor force continues apace. Indeed, it is just this exodus that keeps the official unemployment rate from reaching double-digit heights reminiscent of the Great Depression of the 1930s.

Our first distortion was an excise tax. The ObamaCare mandates can also be viewed as a tax. The business mandates are equivalent to a tax on employment, since their implementation and penalties are geared to the level of employment. The Hudson study calculated that, assuming a hypothetical wage of $12 per hour, employing the 50th person would cost the firm $52 per hour, of which only $12 was paid out in wages to the employee. The difference between what the firm must pay out and what the employee receives is called “the wedge” by economists, since it reduces the incentive to hire and to work. The wider the wedge, the greater the disincentive. Presumably, this is yet another unintended consequence at work.
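The wedge itself is simple arithmetic on the two figures just cited. A minimal sketch (Python), using the Hudson study's $12 wage and $52 hourly cost as reported above:

wage = 12.0           # dollars per hour received by the marginal (50th) employee
cost_to_firm = 52.0   # dollars per hour the firm must pay out, per the Hudson study

wedge = cost_to_firm - wage
print(f"Wedge: ${wedge:.2f} per hour")
print(f"Share of the firm's outlay that never reaches the worker: {wedge / cost_to_firm:.0%}")
# Prints a $40.00 wedge -- about 77% of what the firm pays out.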

ObamaCare is a law that was advertised as the solution to a burgeoning, decades-old problem that threatened to engulf the federal budget. Instead, the law itself now threatens to bring first the government, then the private economy to a standstill. In time, ObamaCare may come to lead the league in unintended consequences – a competition in government ineptitude that can truly be called a battle of the all-stars.

The Food Stamp Program: An Excise Subsidy

In contrast to the first two examples of distortion, the food-stamp program is not a tax but rather its opposite number – a subsidy. Because food stamps are a subsidy given in-kind instead of in cash – a subsidy on a good in contrast to a tax on a good – they are an excise subsidy.

Food stamps began in the 1940s as a supplement to agricultural price supports. Their primary purpose was to dispose of agricultural surpluses, which were already becoming a costly nuisance to the federal government. Their value to the poor was seen as a coincidental, though convenient, byproduct. Although farmers and the poor have long since exchanged places in the hierarchy of beneficiaries, vestiges of the program’s lineage remain in its residence in the Agriculture Department and the source of its annual appropriations in the farm bill. (Roughly 80% of this year’s farm bill was given over to monies for the food-stamp program, which now reaches some 47.3 million Americans, or 15% of the population.)

The fact that agricultural programs help people other than their supposed beneficiaries is not really an example of unintended consequences, since we have known from the outset that price supports, acreage quotas, target prices and other government measures harm the general public and help large-scale farmers much more than small family farmers. The unintended consequences of the food-stamp program are vast, but they are unrelated to its tenuous link to agriculture.

Taxes take real income away from taxpayers, but – at least in principle – they fund projects that ostensibly provide compensating benefits. The unambiguous harm caused by taxes results from the distortions they create, which cause deadweight losses, or pure waste of time, effort and resources. Subsidies, the opposite number of taxes, create similar distortions. The food stamp program illustrates these distortions vividly.

For many years, program recipients received stamp-like vouchers entitling them to acquire specified categories of foodstuffs from participating sellers (mostly groceries). The recipient exchanged the stamps for food at a rate of exchange governed by the stamps’ face value. Certain foods and beverages, notably beverage alcohol, could not be purchased using food stamps.

Any economist could have predicted the outcome of this arrangement. A thriving black market arose in which food stamps could be sold at a discount to face value in exchange for cash. The amount of the discount represented the market price paid by the recipient and received by the broker; it fluctuated with market conditions but often hovered in the vicinity of 50% (!). This transaction allowed recipients to directly purchase proscribed goods and/or non-food items using cash. The black-market broker exchanged the food stamps (quasi-) legally at face value in a grocery in exchange for food or illegally at a small discount with a grocery in exchange for cash. (In recent years, bureaucrats have sought to kill off the black market by substituting a debit card for the stamp/vouchers.)

The size of the discount represents the magnitude of the economic distortion created by giving poor people a subsidy in excise form rather than in cash. Remarkably, large numbers of recipients preferred cash so markedly that $.50 in cash was preferred to $1.00 worth of (government-approved) foodstuffs. This suggests that a program of cash subsidies could have made recipients better off while spending roughly half as much money and dispensing with most of the large administrative costs of the actual food-stamp program.
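The implication for program design follows from one line of arithmetic. A minimal sketch (Python), using the roughly 50% black-market discount cited above and a $1.00 face value simply as the unit of account:

face_value = 1.00
black_market_discount = 0.50                 # the discount reported above, roughly
cash_value_to_recipient = face_value * (1 - black_market_discount)

# A recipient willing to sell at that discount values $1.00 of stamps at about $0.50
# in cash -- so a cash grant of roughly half the face value would leave such a
# recipient no worse off, before even counting administrative costs.
print(f"Cash value per $1.00 of stamps: ${cash_value_to_recipient:.2f}")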

Inefficiency has been the focus of various studies of the overall welfare system. Their common conclusion has been that the U.S. could lift every man, woman and child above the arbitrary poverty line for a fraction of our actual expenditures on welfare programs simply by giving cash to recipients and forgoing all other forms of administrative endeavor.

Of course, the presumption behind all this analysis is that the purpose of welfare programs like food stamps is to improve the well-being of recipients. In reality, the history of the food-stamp program and everyday experience suggests otherwise – that the true purpose of welfare programs is to improve the well-being of donors (i.e., taxpayers) by alleviating guilt they would otherwise feel.

The legitimate objections to cash-subsidy welfare programs focus on the harm done to work incentives and the danger of dependency. The welfare reform crafted by the Republican Congress elected in 1994 and reluctantly signed by President Clinton in 1996 was guided by this attitude, hence its emphasis on work requirements. But the opposition to cash subsidies from the general public, all too familiar to working economists from the classroom and the speaking platform, arises from other sources. The most vocal opposition to cash subsidies comes from those who claim that recipients will use cash to buy drugs, alcohol and other “undesirable” consumption goods – undesirable as gauged by the speaker, not by the welfare recipient. The clear implication is that the food-stamp format is a necessary prophylactic against this undesirable consumption behavior, the corollary implication being that taxpayers have the moral right to control the behavior of welfare recipients.

Taxpayers may or may not be morally justified in asserting the right to control the behavior of welfare recipients whose consumption is taxpayer-subsidized. But this insistence on control is surely quixotic if the purpose of the program is to improve the welfare of recipients. And, after all, isn’t that what a “welfare” program is – by definition? The word “welfare” cannot very well refer to the welfare of taxpayers, for then the program would be a totalitarian program of forced consumption run for the primary benefit of taxpayers and the secondary benefit of welfare recipients.

The clinching point against the excise subsidy format of the food-stamp program is that it does not prevent recipients from increasing their purchases of drugs, alcohol or other forbidden substances. A recipient of (say) $500 in monthly food stamps who spends $1,000 per month on (approved) foodstuffs can simply use the food stamps to displace $500 in cash spending on food, leaving them with $500 more in cash to spend on drugs or booze. In practice, a recipient of a subsidy will normally prefer to increase consumption of all normal goods (that is, goods whose consumption he or she increases when real income rises). Any excise subsidy, including food stamps, will therefore be inferior to a cash subsidy for this reason. In terms of economic logic, an excise subsidy starts out with three strikes against it as a means of improving a recipient’s welfare.
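The displacement argument can also be reduced to arithmetic. A minimal sketch (Python), using the hypothetical $500 benefit and $1,000 food budget from the paragraph above:

monthly_food_budget = 1_000     # dollars the recipient already spends on approved food
stamp_benefit = 500             # dollars per month received in food stamps

# Stamps pay for food the recipient would have bought anyway, freeing an equal
# amount of cash for unrestricted spending (as long as food spending exceeds the benefit).
cash_spent_on_food = max(0, monthly_food_budget - stamp_benefit)
cash_freed_up = monthly_food_budget - cash_spent_on_food

print(f"Cash freed for unrestricted spending: ${cash_freed_up} per month")
# Prints $500 -- the in-kind restriction binds only when the benefit exceeds
# what the recipient would have spent on food anyway.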

So why do multitudes of people insist on wasting vast sums of money in order to make people worse off, when they could save that money by making them better off? The paradox is magnified by the fact that most of these money-wasters are politically conservative people who abhor government waste. The only explanation that suggests itself readily is that by wasting money conspicuously, these people relieve themselves of guilt. They are no longer troubled by images of poor, hungry downtrodden souls. They need feel no responsibility for enabling misbehavior through their tax payments. They have lifted a heavy burden from their minds.

The Rule, Not the Exception

The common themes developed by these examples are the distortion of otherwise-efficient markets by government action and the unintended consequences that flow from those distortions. By its very nature, government acts through compulsion and coercion rather than through mutually beneficial voluntary exchange. Consequently, examples such as those above are not exceptions. They are the normal case.